Handbücher zur Sprach- und Kommunikationswissenschaft Handbooks of Linguistics and Communication Science Manuels de linguistique et des sciences de communication Mitbegründet von Gerold Ungeheuer (†) Mitherausgegeben 1985−2001 von Hugo Steger
Herausgegeben von / Edited by / Edités par Herbert Ernst Wiegand
Band 37
De Gruyter Mouton
Sign Language: An International Handbook
Edited by Roland Pfau, Markus Steinbach, Bencie Woll
De Gruyter Mouton
ISBN 978-3-11-020421-6
e-ISBN 978-3-11-026132-5
ISSN 1861-5090

Library of Congress Cataloging-in-Publication Data
A CIP catalog record for this book has been applied for at the Library of Congress.

Bibliographic information published by the Deutsche Nationalbibliothek
The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available in the Internet at http://dnb.dnb.de.

© 2012 Walter de Gruyter GmbH & Co. KG, 10785 Berlin/Boston
Typesetting: META Systems GmbH, Wustermark
Printing: Hubert & Co. GmbH & Co. KG, Göttingen
Cover design: Martin Zech, Bremen
Printed on acid-free paper
Printed in Germany
www.degruyter.com
Preface

Five long years ago, we met to plan what looked like an impossibly ambitious project – this Handbook. Since then, we have met in Berlin, London, Amsterdam, and Frankfurt; we have exchanged hundreds of e-mails; we have read and commented on dozens of chapters – and we have found the time to write our own. The work on this Handbook has been challenging at times but it has also been inspiring and rewarding. We have learned a lot.

Obviously, a project of this size would have been impossible without the help and encouragement of others. We are therefore grateful to the people and organizations that supported our work on this Handbook. First of all, we wish to express our gratitude to the section editors, who assisted us in providing feedback to authors and in getting the chapters into shape: Onno Crasborn (section I), Josep Quer (section III), Ronnie Wilbur (section IV), Trude Schermer (section VII), Adam Schembri (section VIII), and Myriam Vermeerbergen (section IX). As for the content and final shape of the chapters, we are indebted to all the publishers who granted us permission to reproduce figures, to Nancy Campbell, our meticulous, reliable, and highly efficient editorial assistant, and to Sina Schade and Anna-Christina Boell, who assisted us in the final check of consistency and formatting issues as well as in putting together the index – a truly cumbersome task.

It was a true pleasure to cooperate with the professional and supportive people at Mouton de Gruyter. We are indebted to Anke Beck for sharing our enthusiasm for the project and for supporting us in getting the ball rolling. We are very grateful to Barbara Karlson for guiding and encouraging us throughout the process. Her optimism helped us to keep up our spirits whenever we felt that things were not going as smoothly as we hoped. After talking to her, things always looked much brighter. Finally, we thank Wolfgang Konwitschny for his assistance during the production phase.

Bencie Woll's work on the handbook has been supported by the Economic and Social Research Council of Great Britain (Grants RES-620-28-6001 and 6002), Deafness, Cognition and Language Research Centre (DCAL). Roland Pfau's editorial work was facilitated thanks to a fellowship financed by the German Science Foundation (DFG) in the framework of the Lichtenberg-Kolleg at the Georg-August-University, Göttingen. Last but definitely not least, we thank all the authors who contributed to the handbook for joining us in this adventure.
Contents

Preface
Notational conventions
Sign language acronyms

1. Introduction · Roland Pfau, Markus Steinbach & Bencie Woll

I. Phonetics, phonology, and prosody
2. Phonetics · Onno Crasborn
3. Phonology · Diane Brentari
4. Visual prosody · Wendy Sandler

II. Morphology
5. Word classes and word formation · Irit Meir
6. Plurality · Markus Steinbach
7. Verb agreement · Gaurav Mathur & Christian Rathmann
8. Classifiers · Inge Zwitserlood
9. Tense, aspect, and modality · Roland Pfau, Markus Steinbach & Bencie Woll
10. Agreement auxiliaries · Galini Sapountzaki
11. Pronouns · Kearsy Cormier

III. Syntax
12. Word order · Lorraine Leeson & John Saeed
13. The noun phrase · Carol Neidle & Joan Nash
14. Sentence types · Carlo Cecchetto
15. Negation · Josep Quer
16. Coordination and subordination · Gladys Tang & Prudence Lau
17. Utterance reports and constructed action · Diane Lillo-Martin

IV. Semantics and pragmatics
18. Iconicity and metaphors · Sarah F. Taub
19. Use of sign space · Pamela Perniss
20. Lexical semantics: Semantic fields and lexical aspect · Donovan Grose
21. Information structure · Ronnie B. Wilbur
22. Communicative interaction · Anne Baker & Beppie van den Bogaerde

V. Communication in the visual modality
23. Manual communication systems: evolution and variation · Roland Pfau
24. Shared sign languages · Victoria Nyst
25. Language and modality · Richard P. Meier
26. Homesign: gesture to language · Susan Goldin-Meadow
27. Gesture · Aslı Özyürek

VI. Psycholinguistics and neurolinguistics
28. Acquisition · Deborah Chen Pichler
29. Processing · Matthew W. G. Dye
30. Production · Annette Hohenberger & Helen Leuninger
31. Neurolinguistics · David Corina & Nicole Spotswood
32. Atypical signing · Bencie Woll

VII. Variation and change
33. Sociolinguistic aspects of variation and change · Adam Schembri & Trevor Johnston
34. Lexicalization and grammaticalization · Terry Janzen
35. Language contact and borrowing · Robert Adam
36. Language emergence and creolisation · Dany Adone
37. Language planning · Trude Schermer

VIII. Applied issues
38. History of sign languages and sign language linguistics · Susan McBurney
39. Deaf education and bilingualism · Carolina Plaza Pust
40. Interpreting · Christopher Stone
41. Poetry · Rachel Sutton-Spence

IX. Handling sign language data
42. Data collection · Mieke Van Herreweghe & Myriam Vermeerbergen
43. Transcription · Nancy Frishberg, Nini Hoiting & Dan I. Slobin
44. Computer modelling · Eva Sáfár & John Glauert

Indexes
Index of subjects
Index of sign languages
Index of spoken languages
Notational conventions

As is common convention in the sign language literature, signs are glossed in small caps (sign) in the examples as well as in the text. Glosses are usually in English, irrespective of the sign language, except for examples quoted from other sources where these are not in English (see chapter 43 for a detailed discussion of the challenges of sign language transcription). The acronym for the respective sign language is always given at the end of the gloss line (see next section for a list of the acronyms used in this handbook). For illustration, consider the following examples from Sign Language of the Netherlands (NGT) and German Sign Language (DGS).

                                        y/n
(1)  index2 h-a-n-s index3a book++ 2give:cl3a                [NGT]
     'Will you give Hans the books?'

(2)  two-days-ago monk^boss school index3a visit3a           [DGS]
     'Two days ago, the abbot visited the school.'
With respect to manual signs, the following notation conventions are used.

index3/ix3   pointing sign used in pronominalization (e.g. index2 in (1)) and for localizing non-present referents and locations in the signing space (e.g. index3a in (1) and (2)). The subscript numbers refer to points in the signing space and are not necessarily meant to reflect person distinctions: 1 = towards signer's chest; 2 = towards addressee; 3a/3b = towards ipsi- or contralateral side of the signing space.
1sign3a      verb sign moving in space from one location to another; in (1), for example, the verb sign give moves from the locus of the addressee to the locus introduced for the non-present referent 'h-a-n-s'.
s-i-g-n      represents a fingerspelled sign.
sign^sign    indicates either the combination of two signs in a compound, e.g. monk^boss 'abbot' in (2), or a sign plus affix/clitic combination (e.g. know^not); in both types of combinations, characteristic assimilation and/or reduction processes may apply.
sign-sign    indicates that two or more words are needed to gloss a single sign (e.g. two-days-ago in (2)).
sign++       indicates reduplication of a sign to express grammatical features such as plurality (e.g. book++ in (1)) or aspect (e.g. iterative or durative aspect).
cl           indicates the use of a classifier handshape that may combine with verbs of movement and location (e.g. give in (1)); throughout the handbook, different conventions are used for classifiers: the cl may be further specified by a letter of the manual alphabet (e.g. cl:c) or by a subscript specifying either a shape characteristic or the entity that is classified (e.g. clround or clcar).

Lines above the glosses (as in (1)) indicate the scope, that is, the onset and offset of a particular non-manual marker, be it a lexical, a morphological, a syntactic, or a prosodic marker. Below we provide a list of the most common markers. Note that some of the abbreviations used refer to the function of the non-manual marker (e.g. 'top' and 'neg') while others refer to its form (e.g. 're' and 'hs'). When necessary, additional markers will be introduced in the respective chapters.

/xxx/   lexical marker: a mouthing (silent articulation of (part of) a spoken word) associated with a sign;
xxx     lexical or morphological marker: a mouth gesture associated with a sign;
top     syntactic topic marker;
wh      syntactic wh-question marker;
y/n     syntactic yes/no-question marker (as in (1));
rel     syntactic relative clause marker;
neg     syntactic negation marker;
hs      headshake;
hn      headnod;
re      raised eyebrows.
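Viewed computationally, these glossing conventions amount to a small, regular notation that can be recognised mechanically. The following Python fragment is a minimal illustrative sketch of such a recogniser; it is our own toy, not part of the handbook's conventions, and the category names and patterns are invented for illustration (the cl-annotated verb in (1) is simplified to 2give3a):

```python
import re

# Hypothetical classifier for the manual-sign gloss conventions above
# (sketch only; category names and regexes are our own, not the handbook's).
PATTERNS = [
    ("pointing",      re.compile(r"^(index|ix)(1|2|3a|3b)$")),      # index3/ix3
    ("agreeing_verb", re.compile(r"^(1|2|3a|3b)\w+?(1|2|3a|3b)$")), # 1sign3a
    ("fingerspelled", re.compile(r"^\w(-\w)+$")),                   # s-i-g-n
    ("compound",      re.compile(r"^\w+\^\w+$")),                   # sign^sign
    ("reduplicated",  re.compile(r"^\w+\+\+$")),                    # sign++
    ("multiword",     re.compile(r"^\w+(-\w+)+$")),                 # sign-sign
]

def classify(gloss: str) -> str:
    """Return the notation type of a single gloss token."""
    for label, pattern in PATTERNS:
        if pattern.match(gloss):
            return label
    return "plain"

for token in ["index3a", "h-a-n-s", "monk^boss", "book++",
              "two-days-ago", "2give3a"]:
    print(token, "->", classify(token))
```

Note that pattern order matters: fingerspelled sequences like h-a-n-s must be tried before the general hyphenated multiword pattern, since both use hyphens.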
As for handshapes, whenever possible, the Tang handshape font is used (http://www.cuhk.edu.hk/cslds), instead of labels relating to manual alphabet or counting systems, because the latter may differ from sign language to sign language (e.g. the T-hand is different in ASL, NGT, and DGS); that is, we use ':-hand' instead of 'C-hand', etc.

The usual convention concerning the use of upper case D in Deaf vs. deaf is respected. Deaf with an upper-case D refers to (members of) linguistic communities characterized by the use of sign languages. Lower case deaf refers to an individual's audiological status.
Sign language acronyms

Below we provide a list of sign language acronyms that are used throughout the handbook. Within every chapter, acronyms will also be introduced when a particular sign language is mentioned for the first time. For some sign languages, alternative acronyms exist in the sign language literature (for instance, ISL is commonly used for both Israeli Sign Language and Irish Sign Language, and Libras for Brazilian Sign Language). Note that some of the acronyms listed below are based on the name of the sign language in the respective country; these names are given in brackets in italics.

ABSL        Al-Sayyid Bedouin Sign Language (Israel)
AdaSL       Adamorobe Sign Language (Ghana)
ASL         American Sign Language
Auslan      Australian Sign Language
BSL         British Sign Language
CisSL       Cistercian Sign Language
CSL         Chinese Sign Language
DGS         German Sign Language (Deutsche Gebärdensprache)
DSL         Danish Sign Language
FinSL       Finnish Sign Language
GSL         Greek Sign Language
HKSL        Hong Kong Sign Language
HZJ         Croatian Sign Language (Hrvatski Znakovni Jezik)
IPSL        Indopakistani Sign Language
IS          International Sign
Irish SL    Irish Sign Language
Israeli SL  Israeli Sign Language
ISN         Nicaraguan Sign Language (Idioma de Señas Nicaragüense)
KK          Sign Language of Desa Kolok, Bali (Kata Kolok)
KSL         Korean Sign Language
LIL         Lebanese Sign Language (Lughat il-Ishaarah il-Lubnaniah)
LIS         Italian Sign Language (Lingua Italiana dei Segni)
LIU         Jordanian Sign Language (Lughat il-Ishaara il-Urdunia)
LSA         Argentine Sign Language (Lengua de Señas Argentina)
LSB         Brazilian Sign Language (Língua de Sinais Brasileira)
LSC         Catalan Sign Language (Llengua de Signes Catalana)
LSE         Spanish Sign Language (Lengua de Señas Española)
LSF         French Sign Language (Langue des Signes Française)
LSQ         Quebec Sign Language (Langue des Signes Québécoise)
MSL         Mauritian Sign Language
NCDSL       North Central Desert Sign Language (Australia)
NGT         Sign Language of the Netherlands (Nederlandse Gebarentaal)
NS          Japanese Sign Language (Nihon Syuwa)
NSL         Norwegian Sign Language
NZSL        New Zealand Sign Language
ÖGS         Austrian Sign Language (Österreichische Gebärdensprache)
PISL        Plains Indian Sign Language (North America)
PISL        Providence Island Sign Language
RSL         Russian Sign Language
SASL        South African Sign Language
SGSL        Swiss-German Sign Language
SKSL        South Korean Sign Language
SSL         Swedish Sign Language
TİD         Turkish Sign Language (Türk İşaret Dili)
TSL         Taiwan Sign Language
VGT         Flemish Sign Language (Vlaamse Gebarentaal)
WSL         Warlpiri Sign Language (Australia)
YSL         Yolngu Sign Language (Australia)
1. Introduction

1. The impact of sign language research on linguistics
2. Why a handbook on sign language linguistics is timely and important
3. Structure of the handbook
4. Literature
1. The impact of sign language research on linguistics

Before the beginning of sign language linguistics, sign languages were regarded as exemplifying a primitive universal way of communicating through gestures. Early sign linguistic research from the 1960s onward emphasized the equivalences between sign languages and spoken languages and the recognition of sign languages as full, complex, independent human languages. Contemporary sign linguistics now explores the similarities and differences between different sign languages, and between sign languages and spoken languages. This move has offered a new window on human language but has also posed challenges to linguistics. While it is uncommon to find an introductory text on linguistics which does not include some mention of sign language, and sign language linguistics is increasingly offered as a subject within linguistics departments, instead of being restricted to departments of speech and language pathology, there is still great scope for linguists to recognize that sign language linguistics provides a unique means of exploring the most fundamental questions about human language: the role of modality in shaping language, the nature of linguistic universals approached cross-modally, the functions of iconicity and arbitrariness in language, and the relationship of language and gesture. The answers to these questions are not only of importance within the field of linguistics but also to neuroscience, psychology, the social sciences, and to the broadest understanding of human communication. It is in this spirit that this Handbook has been created.
2. Why a handbook on sign language linguistics is timely and important

The sign language linguistics scene has been very active in recent years. First of all, sign language linguists have contributed (and continue to contribute) to various handbooks, addressing topics from a sign language perspective and thus familiarizing a broader audience with aspects of sign language research and structure; e.g. linguistics in general (Sandler/Lillo-Martin 2001), cognitive linguistics (Wilcox 2007), linguistic analysis (Wilcox/Wilcox 2010), phonology (Brentari 2011), grammaticalization (Pfau/Steinbach 2011), and information structure (Kimmelman/Pfau forthcoming). A recent handbook that focuses entirely on sign languages is Brentari (2010); this handbook covers three broad areas: transmission, structure, and variation and change. There have also been several comprehensive introductory textbooks on single sign languages – e.g. British Sign Language (Sutton-Spence/Woll 1999), Australian Sign Language (Johnston/Schembri 2007), and Israeli Sign Language (Meir/Sandler 2008) – which discuss some of the issues also addressed in the present handbook. The focus of these books, however, is clearly on structural, and to a lesser extent, historical and sociolinguistic, aspects of the respective sign language. A textbook that focuses on structural and theoretical aspects of sign language grammar, discussing examples from different sign languages (mostly American Sign Language and Israeli Sign Language), is Sandler and Lillo-Martin (2006). The central aim of that book is to scrutinize the existence of alleged linguistic universals in the light of languages in the visual-gestural modality.

The time is thus ripe for a handbook on sign language linguistics that addresses a wider range of topics from cross-linguistic, cross-modal, and theoretical perspectives. It is these features which distinguish the present handbook from previous publications, making it a unique source of information: First, it covers all areas of contemporary linguistic research. Second, given that sign language typology is a fascinating and promising young research field, authors have been encouraged to address the topic of their chapter from a broad typological perspective, including – wherever possible – data from different sign languages, thus also illustrating the range of variation attested among sign languages. Third, where appropriate, the contributions also sketch theoretical analyses for the phenomena under discussion, providing a neutral survey of existing, sometimes conflicting, approaches. Therefore, this handbook is of relevance to general linguistics, that is, it is designed not only for linguists researching sign language but also for linguists researching spoken language. Examples are provided from a large number of sign languages covering all regions of the world, illustrating the similarities and differences among sign languages and between sign languages and spoken languages. The book is also of interest to those working in related fields such as psycholinguistics and sociolinguistics and to those in applied fields, such as language learning and neuropsychology.
3. Structure of the handbook

The handbook consists of 44 chapters organized in nine sections, each of which has been supervised by a responsible section editor. Although each chapter deals with a specific topic, several topics make an appearance in more than one chapter. The first four sections of the handbook (sections I–IV) are dedicated to the core modules of grammar (phonetics, phonology, morphology, syntax, semantics, and pragmatics). The fifth section deals with issues of sign language evolution and typology, including a discussion of the similarities and differences between signing and gesturing. Psycho- and neurolinguistic aspects of sign languages are discussed in section VI. Section VII addresses sociolinguistic variation and language change. Section VIII discusses a number of applied issues in sign language linguistics such as education, interpreting, and sign language poetry. Finally, section IX deals with questions of sign language documentation, transcription, and computer modelling.

Despite the broad coverage, a few topics do not receive a detailed discussion in the handbook; among these are topics such as Deaf culture, literacy, educational practices, mental health, sign language assessment, ethical issues, and cochlear implants. We refer the reader to Marschark and Spencer (2003, 2010), two comprehensive handbooks that address these and many other issues of an applied nature. We hope – whatever one's background – the reader will be drawn along new paths of interest and discovery.
4. Literature

Brentari, Diane (ed.) 2010. Sign Languages (Cambridge Language Surveys). Cambridge: Cambridge University Press.
Brentari, Diane 2011. Sign Language Phonology. In: Goldsmith, John A./Riggle, Jason/Yu, Alan C. L. (eds.), The Handbook of Phonological Theory (2nd Revised Edition). Oxford: Blackwell, 691–721.
Johnston, Trevor/Schembri, Adam 2007. Australian Sign Language. An Introduction to Sign Language Linguistics. Cambridge: Cambridge University Press.
Kimmelman, Vadim/Pfau, Roland forthcoming. Information Structure in Sign Languages. In: Féry, Caroline/Ishihara, Shinichiro (eds.), The Oxford Handbook of Information Structure. Oxford: Oxford University Press.
Marschark, Mark/Spencer, Patricia E. (eds.) 2003. Oxford Handbook of Deaf Studies, Language, and Education. Oxford: Oxford University Press.
Marschark, Mark/Spencer, Patricia E. (eds.) 2010. Oxford Handbook of Deaf Studies, Language, and Education, Volume 2. Oxford: Oxford University Press.
Meir, Irit/Sandler, Wendy 2008. A Language in Space. The Story of Israeli Sign Language. New York: Lawrence Erlbaum.
Pfau, Roland/Steinbach, Markus 2011. Grammaticalization in Sign Languages. In: Narrog, Heiko/Heine, Bernd (eds.), The Oxford Handbook of Grammaticalization. Oxford: Oxford University Press, 683–695.
Sandler, Wendy/Lillo-Martin, Diane 2001. Natural Sign Languages. In: Aronoff, Mark/Rees-Miller, John (eds.), The Handbook of Linguistics. Oxford: Blackwell, 533–562.
Sandler, Wendy/Lillo-Martin, Diane 2006. Sign Languages and Linguistic Universals. Cambridge: Cambridge University Press.
Sutton-Spence, Rachel/Woll, Bencie 1999. The Linguistics of British Sign Language: An Introduction. Cambridge: Cambridge University Press.
Wilcox, Sherman 2007. Signed Languages. In: Geeraerts, Dirk/Cuyckens, Herbert (eds.), The Oxford Handbook of Cognitive Linguistics. Oxford: Oxford University Press, 1113–1136.
Wilcox, Sherman/Wilcox, Phyllis P. 2010. The Analysis of Signed Languages. In: Heine, Bernd/Narrog, Heiko (eds.), The Oxford Handbook of Linguistic Analysis (Oxford Handbooks in Linguistics). Oxford: Oxford University Press, 739–760.
Roland Pfau, Amsterdam (The Netherlands) Markus Steinbach, Göttingen (Germany) Bencie Woll, London (United Kingdom)
I. Phonetics, phonology, and prosody

2. Phonetics

1. Introduction
2. The modality difference
3. Phonetics vs. phonology
4. Articulation
5. Phonetic variation
6. Conclusion
7. Literature
Abstract

Sign and spoken languages differ primarily in their perceptual channel, vision vs. audition. This 'modality difference' has an effect on the structure of sign languages throughout the grammar, as is discussed in other chapters in this volume. Phonetic studies of sign languages typically focus on the articulation of signs. The arms, hands, and fingers form very complex articulators that allow for many different articulations for any given phonological specification for hand configuration, movement, and location. Indeed, phonetic variation in sign language articulation is abundant, and in this respect, too, sign languages resemble spoken languages.
1. Introduction

Sign languages are produced by body movements that are perceived visually, while spoken languages are produced by vocal articulation and perceived by the ear. This most striking difference between sign and spoken languages is termed the 'modality difference'. It refers to a difference in communication channel that is often considered to be the ultimate cause for structural differences between spoken and sign languages. Since auditory perception is better targeted at processing small temporal detail than visual perception, and since the manual articulators in signing move slower than the oral articulators in speech, one would for example predict the richness of simultaneous information in sign languages (Vermeerbergen/Leeson/Crasborn 2006). In all, this chapter aims to characterise the area of sign language phonetics rather than to provide an exhaustive overview of the studies that have been done. The focus will be on the manual component in terms of articulation and phonetic variation.

Despite the large importance that is often (intuitively) attributed to the phonetic difference between sign and speech, relatively little research within the field of sign language studies has focused on the area of sign language phonetics, especially in comparison to the phonological analysis of sign languages. This is illustrated by the fact that none of the textbooks on sign language that have appeared in recent years includes 'phonetics' as a keyword (e.g., Boyes Braem 1995; Sutton-Spence/Woll 1999; Emmorey 2002; Sandler/Lillo-Martin 2006; Johnston/Schembri 2007; Meir/Sandler 2008).

In section 2, the modality difference is discussed in further detail. Section 3 will then discuss the relation between phonetics and phonology in sign languages, as it may not be self-evident how a phonetic and a phonological level of analysis can be distinguished in a visual language. Section 4 discusses articulation, and section 5 takes a look at phonetic variation. (Note that perception studies are also discussed in section VI of the handbook; see especially chapter 29 on processing. The phonetic transcription and notation of sign languages are covered in chapter 43.)
2. The modality difference

It is attractive to see modality as a black-and-white distinction in channel between spoken language and sign language. One is auditory, the other visual. The deep embedding of writing systems and written culture in many civilisations has perhaps contributed to our view of spoken language as a string of sounds, downplaying the presence of non-verbal communication and visual communication more generally among hearing people (Olson 1994). Yet there is growing evidence for the multimodality of spoken language communication among hearing people. It is clear that visual aspects of communication among hearing people can be complementary to auditory signals. For example, emotional state is often visible in the facial expression while someone speaks (Ekman 1993), and many interactional cues are expressed by a wide variety of head movements (McClave 2000). Manual gestures are known to serve many functions that complement the content of the spoken utterances (McNeill 1992; Kendon 2004). Moreover, there is also evidence that speech itself is not only perceived auditorily but also visually. McGurk and MacDonald (1976) showed that the visible state of the face can influence the auditory perception of consonants. More recently, Swerts and Krahmer (2008) demonstrated that manual beat gestures are interpreted as signalling increased prominence of the simultaneously uttered spoken word. However, while hearing people are very skilled at perceiving speech without looking at the speaker (as when communicating by telephone), they are very bad at speech-reading without any acoustic input (Woodward/Barber 1960). Only a small subset of the articulatory features of speech sounds can actually be seen (mainly lip rounding and opening, labiodental contact, and jaw height), while others such as the state of the glottis, velum lowering, and tongue dorsum height are invisible. Thus, for the segmental or syllabic level in speech, it remains fair to say that speech primarily makes use of the acoustic-auditory modality, while there is some visual input as well.

So as a starting point, it should be emphasised that the 'modality difference' appears not to be a black-and-white contrast in phonetic channel. While sign languages are exclusively perceived visually by their core users, deaf people, spoken languages are perceived both auditorily and visually. Ongoing research on spoken language communication is exploring the role of visual communication among hearing people more and more, including the role of gestures and facial expressions that are exclusively expressed visually. Hearing users of sign languages can in principle also hear some of the sounds that are made, for instance by the lips or the hands contacting each other, yet this is unlikely to have a substantial phonetic impact on the linguistic structure of sign languages given the fact that the core users of sign languages only have little residual hearing, if any. The modality difference is summarised in Figure 2.1.
                         ACTION            SIGNAL           PERCEPTION
Hearing communication    bodily actions    sound / light    auditory perception / visual perception
Deaf communication       bodily actions    light            visual perception

Fig. 2.1: The modality difference
Where researchers have made significant progress in the acoustic analysis of the speech signal and in the study of auditory perception, we have very little knowledge of the signal and perception components of the communication chain of sign languages. Yet these are important to study, as general human perceptual abilities form the framework within which linguistic perception takes place. The phonetic research that has been done has focused almost exclusively on the articulation of sign languages (but see Bosworth 2003 for a notable exception). Therefore this chapter will also be primarily devoted to sign language articulation. The reason for this may be that visual perception is extremely complex. While there are only a few parameters of a small section of the electromagnetic spectrum that the human visual system can exploit (luminance and wavelength), these parameters constitute the input to a large array of light-sensitive tissue (the retina) of the two eyes, which themselves move with our head and body movements and which can also move independently (together constituting 'eye gaze'). The human brain processes this very complex input in highly intricate ways to give us the conscious impression that we see three-dimensional coloured objects moving through space over time (Zeki 1993; Palmer 1999). At a high level of processing, there are abstract forms that the brain can recognise. There have been very few if any sign language studies that have aimed to describe the phonetic form of signs in such abstract visual categories (see Crasborn 2001, 2003 for attempts in that direction). It is clearly an underexplored area in the study of sign languages. This may be due to the lack of a specialised field of 'body movement perception' in perceptual psychology that linguists can readily borrow a descriptive toolkit from, whereas anatomical and physiological terminology is gratefully borrowed from the biological and medical sciences when talking about the articulation of finger movements, for example.

Two generalisations about visual perception have made their way into the sign language literature in attempts to directly link properties of visual perception to the structure of sign languages. First, Siple (1978) noted that the visual field can be divided into a 'centre' and a 'periphery'. The centre is a small area in which fine spatial detail is best processed, while in the relatively large periphery it is motion rather than fine details that are best perceived. Siple argued that native signers perceiving ASL focus their eye gaze around the chin, and do not move their gaze around to follow the movements of the hands, for example. Thus, someone looking at signing would see more details of handshape, orientation, and location for signs near the face than for signs made lower on the body or in front of the trunk.
This distinction might then provide an explanatory basis for finer phonological location distinctions near the face area as compared to the upper body area. Irrespective of the data on phonological location distinctions, this hypothesis is hard to evaluate since the face area also includes many visual landmarks that might also help perceivers distinguish small phonetic differences in place of articulation and categorise these as phonologically distinct locations. Since 1978, very few if any eye tracking studies have specifically evaluated to what extent eye gaze is actually relatively immobile and focused on the chin in sign language perception. Also, we do not know whether this differs for different sign languages, nor whether there are differences in the perceptual behaviour of early versus late sign language learners. A related hypothesis that has not yet been tested is that there are more and finer handshape distinctions in the lexicon of any sign language for locations at the face than for lower locations.

The second generalisation concerns the temporal processing of sound versus light. Auditory perception is much better suited to distinguishing fine temporal patterns than visual perception. This general difference is sometimes correlated to the sequential structure found in spoken language phonology, where a sequence of segments together can constitute one syllable, and in turn sequences of syllables can be the form of single morphemes. In sign language, morphemes typically do not show such temporal complexity (van der Kooij/Crasborn 2008). The phonological structure of signs is discussed in the next chapter in this section. While the perceptual functional explanation for the difference in phonological structure may well be valid, there is an equally plausible explanation in terms of articulatory differences: the large difference in size between the arms, hands, and fingers that are mostly involved in the realisation of lexical items and the oral articulators involved in the production of speech sounds leads to a difference in the speed of movement, assuming a constant energy expense. The mouth, lips, and tongue are faster than the fingers and hands, and we thus correctly predict more fine-grained temporal articulations in speech than in sign. As for the first generalisation about the influence of language modality on structure, very few if any concrete studies have been done in this area, for example allowing us to disentangle articulatory and perceptual influences.
3. Phonetics vs. phonology

The phonetic study of sign languages includes the low-level production and perception of manual and non-manual signals. It is much less evident how such phonetic analysis of language relates to the phonological structure. As chapter 3 on phonology makes clear, we have a good understanding of the phonological characteristics of several sign languages and of sign languages in general. However, one cannot directly observe the categorical properties and structures in sign language phonology: they have to be inferred from the gradient phonetic form. Perhaps the impression that we can see the articulators in sign languages has made it self-evident what the phonological form looks like, and in that way reduced the need for an accurate phonetic description.

The first description of the manual form of signs that was introduced by Stokoe (1960) in his groundbreaking work was clearly targeted at the lexical phonological level. It used explicit articulatory terms in the description of the orientation of the hand, even though it aimed to characterise the distinctions within this 'minor' parameter at a phonological level. Orientation was characterised in terms of 'prone' and 'supine', referring to the rotation of the forearm around its length axis. There has never been a phonetic variant of Stokoe's system that has been commonly used as a phonetic notation system. Phonetic notation systems such as HamNoSys (http://www.signlang.uni-hamburg.de/projects/hamnosys.html) are sometimes used in lexicography. HamNoSys itself is based on the linguistic analyses initiated by Stokoe, describing the handshape, location, and movement for a manual sign, but it allows for the transcription of finer phonetic detail than a phonological characterisation would require, and like the International Phonetic Alphabet (IPA) for spoken languages it is not designed for one specific language (see chapter 43 for details). Another ongoing effort to describe phonetic events in sign languages aims to describe American Sign Language (ASL) at a fine articulatory level of detail, yet still incorporates categories (similar to 'movements' and 'holds') that cannot be directly observed in a video recording of sign but that derive from a specific phonological analysis (Johnson/Liddell 2010, 2011a,b, to appear).

What we consider to be 'phonetic' and 'phonological' descriptions and how these two interact depends on our model of these different components of language form. Different types of spoken language models have been applied to sign languages, from rule-based formalisms of the SPE (Chomsky/Halle 1968) type to modern constraint-based models (e.g., Sandler 1989; Corina/Sandler 1993; van der Hulst 1993; Brentari 1998). Irrespective of the specific model that is used, such models can help us to get a better grip on what we talk about when we describe a phonetic form in sign language. As an example, Figure 2.2 presents an overview of the Functional Phonology model developed by Boersma (1998, 2007) for spoken languages that was adopted by Crasborn (2001) for the description of a sign language.
Fig. 2.2: The Functional Phonology model
For example, take the sign proof from Sign Language of the Netherlands (NGT) as illustrated in Figure 2.3. The underlying form of this sign specifies that the dominant hand touches the non-dominant hand repeatedly, and that the shape of the two hands is flat with all fingers selected. By default, signs that are specified for a location on the non-dominant hand are realised with both hands in the centre of neutral space. This predictable aspect of the phonological form is added to form the phonological surface representation in the phonetic implementation, and it may be impacted by the phonetic context, showing coarticulation effects (Ormel/Crasborn/van der Kooij 2012). Likewise, the phonological characterisation of the form of signs does not contain any details of how the movement is executed: whether it is the elbow, wrist, or even the fingers that extend to realise the contact with the other hand, or both, is left to the phonetic implementation. It is not fully predictable by phonological rules alone, as the phonetic form of a word or sign is also determined by all kinds of sociolinguistic and practical factors (see Crasborn 2001 for extensive discussion). In the instance of the sign proof in Figure 2.3, all three joint types appear to participate in the downward movement. This specific type of phonetic variation will be further discussed in section 5.5.
Fig. 2.3: proof (NGT)
In the Functional Phonology model, the form of signs that is stored in the lexicon is a perceptual target, whereas the concrete phonetic realisation at a given point in time needs to be characterised at both an articulatory and a perceptual level in order to be properly understood. Most phonological models of sign languages aim for the characterisation of the underlying form of signs, yet this can be viewed as clearly distinct from the phonetic form that is generated by the phonetic implementation in the model above. Section 5 of this chapter will discuss studies on phonetic variation, and we will see how these different articulations (phonetic forms) relate to a single underlying representation. First, section 4 will discuss in some detail how the articulation of signs can be described.
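To make the division of labour in Figure 2.2 concrete, the following Python sketch walks the NGT sign proof through the three stages described above: a sparse underlying form, a surface form enriched by the default rule for non-dominant-hand locations, and several equally valid phonetic implementations. The feature names and the rule formulation are our own simplification, not Boersma's or Crasborn's actual formalism:

```python
# Toy illustration of the Functional Phonology pipeline of Figure 2.2:
# an underlying (perceptual) form is enriched by default rules into a
# surface form, which can then receive several different articulations.
UNDERLYING_PROOF = {
    "handshape": "flat, all fingers selected",
    "location": "non-dominant hand",     # phonologically specified
    "movement": "repeated contact",
}

def surface_form(underlying: dict) -> dict:
    """Phonological surface representation: fill in predictable values."""
    surface = dict(underlying)
    # Default: signs located on the non-dominant hand are realised
    # with both hands in the centre of neutral space.
    if surface["location"] == "non-dominant hand":
        surface["place_in_space"] = "centre of neutral space"
    return surface

def phonetic_implementation(surface: dict, joints) -> dict:
    """One of many possible articulations: the phonology does not say
    whether the elbow, the wrist, the finger joints, or all three
    execute the downward movement."""
    return {**surface, "active_joints": joints}

# Three phonetically different realisations of one underlying form:
for choice in [("elbow",), ("wrist",), ("elbow", "wrist", "finger joints")]:
    print(phonetic_implementation(surface_form(UNDERLYING_PROOF), choice))
```

The design point is simply that the mapping from surface form to articulation is one-to-many, which is where the phonetic variation discussed in section 5 lives.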
4. Articulation

4.1. Levels of description

The articulation of manual signs can be characterised in different ways. Figure 2.4a presents an overview of the parts of the upper limb. We can describe the location and orientation of the various body parts (fingers, whole hand, forearm, upper arm) in space or relative to the upper body or head, for example. In the sign language literature, we mostly find descriptions of the whole hand or of one or more of the fingers with respect to a body location or in the 'neutral space' in front of the body. Such descriptions rarely describe in detail the location and rotation of the upper arm, for example. It is the 'distal end' of the articulator that realises the phonologically specified values for location and movement in almost all lexical items in sign languages studied to date. The anatomical terms 'distal' and 'proximal' refer to the relative location with respect to the torso, following the line of the arm and hand (see Figure 2.4b). An additional pair of terms displayed in Figure 2.4b is 'ipsilateral – contralateral'. These are similar to 'left – right', yet take the side of the active articulator as a basis: ipsilateral refers to the side of the articulator in question, whereas contralateral refers to the opposite side. As such, these terms are better suited to describe the bilaterally symmetric human body than the terms 'left – right' are.
a. Body parts and joints
b. Location terms
c. Sides of the hand
d. Rotation states of the forearm

Fig. 2.4: Terminology used for the description of manual signs
Alternatively, one can also look at manual articulations by focusing on the state of the different joints, from the shoulder to the most distal finger joints. For joints like the elbow that have only one degree of freedom, this is very straightforward, while other joints are more complex. The wrist has two degrees of freedom in its movement (flexion-extension and lateral flexion-extension), while the shoulder not only allows movement of the upper arm at the upper body (three degrees of freedom: flexion in two dimensions plus rotation about the upper arm axis), but also shows restricted movement of the shoulder blade and clavicle with respect to the torso, affecting the whole arm plus the hand.

In addition to describing articulation in terms of body part states or joint states, one can look at the muscles involved in movements of the arms and hands. There are a large number of muscles involved in the articulation of each sign, and as they are not directly visible, knowledge about the anatomy and physiology of the hand is needed to create such descriptions. Several sign language studies have focused on this level of description in an attempt to phonetically distinguish easy from hard articulations; these will be discussed in section 4.2.

The phonological description of signs typically centres on the hand: its shape, rotation in space, location, and movement are represented in the lexicon. Such a specification does not contain a concrete articulatory specification, irrespective of the level of description. In terms of the model outlined in Figure 2.2, a phonetic implementation is needed to generate a phonetic form from a phonological surface form. Take for example the NGT sign india. Its phonological specification includes the location forehead, the extended thumb as the selected finger, and a rotation movement of the thumb at the forehead. As the state of more proximal joints will influence the location of the end of the extremity, the state of the upper body will also influence the location of the fingertips. Thus, bringing the tip of the thumb to the forehead (in other words, articulating the phonological location) does not only involve a specific state of the shoulder, elbow, wrist, and thumb joints, but needs to take into account the current state of the upper body and head. When the head is turned rightwards, the hand will also need to be moved rightwards, for example by rotating the upper arm outwards. Thus, while the phonological specification of a sign contains global phonetic information on the realisation of that sign, it is quite different from its actual articulation in a given instance.

Although this section aimed to characterise the articulation of manual parts of signs, a short note on non-manual articulations is in order. The articulations of the jaw, head, and upper body can be described in ways similar to those of the arms and hands. Facial articulations are different in that, other than the lower jaw, there are no bones underlying the skin of the face that can move. Rather, what we see when we describe facial expressions is the impact that the muscles have on the skin of the face. Psychologist Paul Ekman and colleagues have developed a notation system to analyse these articulations. The system emphasises that there is no one-to-one mapping between muscle actions and visible changes in the skin. In other words, we cannot directly see the muscles, but only their effect on the facial skin. The FACS coding system uses the term 'action unit' for each type of articulation; each action unit can be the result of the action of one or more muscles (Ekman/Friesen/Hager 2002).
2. Phonetics
(a) Fingers flexed, wrist hyperextended (b) Fingers extended, wrist flexed Fig. 2.5: The relation between finger extension and hand position in two articulatory configurations
tions in Figure 2.5: when all fingers are closed (2.5a), the wrist is hyperextended; by consequence, the hand appears more ‘backwards’ than when all fingers are open and the wrist can flex (2.5b). The literature on ASL contains several studies on handshape that make reference to the articulation of the fingers, arguing that some handshapes are easier to articulate than others (Mandel 1981; Woodward 1982, 1985, 1987; Ann 1993). Patterns of frequency of occurrence ⫺ both within the ASL lexicon and in comparison to the lexicon of other sign languages ⫺ were attributed as evidence for the ‘unmarked’ status of handshapes with only the index, thumb, or little finger extended, or with all fingers extended. Supporting evidence came from the order of acquisition of such handshapes. Such distributional (phonological) patterns were related to articulatory (phonetic) properties. Ann (1993, 2008) was the first to perform a detailed physiological study of the articulation of all handshapes. She argued that many of the patterns that were found could be explained by reference to the anatomy and physiology of the hand. For instance, both the index finger and the little finger have a separate extensor muscle and tendon allowing them to extend independently (viz. the extensor indicis proprius and the extensor digiti minimi). The middle and ring fingers do not: they can only be extended on their own by employing a shared extensor muscle for all four fingers (the extensor digitorum communis) while other muscles simultaneously flex the other fingers. A different articulatory constraint appears to play a role in the formation of some morphological forms. Mathur and Rathmann (2001) argued that the range of motion of the arm joints restricts the inflection of some verbs in sign languages. Inflections for first person plural objects (as in ‘send us’) do not occur if their articulation requires extreme flexion or rotation at multiple joints. These articulations are required in combining an arc movement (part of the first person plural morpheme) with the lexical orientation and location specifications of verbs such as invite in ASL and German Sign Language (DGS) or pay in Australian Sign Language (Auslan).
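Ann's anatomical observation above suggests a crude decision rule for 'cheap' versus 'hard' selective finger extension. The following toy function is our own caricature of that idea, not Ann's actual physiological metric:

```python
# Toy ease-of-articulation check inspired by Ann's observation: the index
# and little fingers (and the thumb) have dedicated extensors, while the
# middle and ring fingers rely on the shared extensor digitorum communis.
# The decision rule itself is our own invention, not Ann's metric.
DEDICATED_EXTENSOR = {"thumb", "index", "little"}
ALL_FINGERS = {"thumb", "index", "middle", "ring", "little"}

def extension_is_cheap(extended: set) -> bool:
    """Cheap if every selectively extended finger has its own extensor,
    or if all four fingers extend together via the common extensor."""
    if extended in ({"index", "middle", "ring", "little"}, ALL_FINGERS):
        return True  # the common extensor raises all four fingers at once
    return extended <= DEDICATED_EXTENSOR

for shape in [{"index"}, {"little"}, ALL_FINGERS, {"middle"}, {"middle", "ring"}]:
    print(sorted(shape), "cheap" if extension_is_cheap(shape) else "hard")
```

On this toy rule, the handshapes reported as unmarked (index, thumb, or little finger only, or all fingers extended) come out as cheap, while selective extension of the middle or ring finger comes out as hard.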
5. Phonetic variation

5.1. Introduction

Studies on the articulation of signs as described above form an important contribution to our phonetic understanding of signs. In most of the studies that were done until now, this articulatory knowledge was related directly to patterns observed in the lexicon. As the model of the relation between phonetics and phonology in Figure 2.2 makes clear, this is a rather large step to make. As the lexicon contains abstract phonological representations that are more likely to be perceptual than articulatory, it is not always self-evident how a sign (or even the handshape of a sign) can be articulated and whether there is a prototypical articulation of a sign that can be taken as a reference point for studies on markedness.
5.2. Handedness

The phonetic realisation of signs, just as for words in spoken language, is in fact highly variable. In other words, there are many different phonetic forms corresponding to a single phonological underlying form. One obvious aspect that leads to variation is handedness: whether a signer is left-dominant or right-dominant for non-sign tasks is the primary factor in determining whether one-handed signs are typically realised with the left or right hand (Bonvillian/Orlansky/Garland 1982; Sáfár/Crasborn/Ormel 2010). There is anecdotal evidence that L2 learners may find left-handed signers more difficult to perceive.
5.3. Hand height

The height of the hand in signs that are lexically specified for a neutral space location has been shown to vary. Coulter (1993) found that in the realisation of lists of number signs one to five in ASL, the location is realised higher for stressed items and lower for the initial and final items. In an experimental study of ASL, Mauk, Lindblom, and Meier (2008) found that the height of the hand in the realisation of neutral space locations in ASL is raised under the influence of a high location of the hand in the preceding and following sign. The same has been shown for NGT (Ormel/Crasborn/van der Kooij 2012). For signs located on the body, Tyrone and Mauk (2008) found the reverse effect as well: under the influence of a lower location in the preceding or following sign, a target sign assumes a lower location. These raising and lowering effects in the last two studies are argued to be an instance of coarticulation in sign languages. Similar to coarticulation in spoken language, the strength of the effect is gradual and sensitive to the rate of speaking or signing. It is thus not categorical phonological assimilation that leads to the visible difference in phonetic location, but a case of phonetic variation. This analysis is supported by the fact that the degree of experimentally elicited differences in hand height varies across signers (Tyrone/Mauk 2008).
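The gradient and rate-sensitive character of this coarticulation (as opposed to an all-or-nothing assimilation) can be pictured as interpolation toward the neighbouring signs' heights. The toy model below, with entirely invented numbers and weighting, is meant only to make that contrast concrete:

```python
# Toy model of gradient height coarticulation: the realised height of a
# neutral-space sign drifts toward its neighbours' heights, more so at
# higher signing rates. Heights in arbitrary cm; weighting scheme and all
# numbers are invented for illustration.
def realised_height(target, before, after, rate=1.0, k=0.1):
    pull = k * rate  # faster signing -> stronger contextual pull
    return (1 - 2 * pull) * target + pull * before + pull * after

# A neutral-space sign (target 20) between two high, face-located signs (45):
print(realised_height(20, 45, 45, rate=0.5))  # slow signing: 22.5
print(realised_height(20, 45, 45, rate=1.5))  # fast signing: 27.5
```

The output drifts continuously with rate rather than jumping to the neighbours' value, which is the signature of coarticulation as opposed to categorical assimilation.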
5.4. Handshape

Similar coarticulation effects for the realisation of handshapes have been described by Jerde, Soechting, and Flanders (2003) for the articulation of fingerspelling (see also Wilcox 1992). They found both progressive and anticipatory influences of fingerspelled letters on each other in ASL; both dissimilation and assimilation were found. Cheek (2001) found that similar assimilation processes also occur in the articulation of handshapes in regular lexical items in ASL. For example, the extension of the little finger needed for the articulation of the …

4. Visual prosody

(1) syllable > prosodic word > phonological phrase > intonational phrase > phonological utterance
There is clearly a correspondence between prosodic and syntactic constituents like syntactic phrases such as noun phrases (corresponding roughly to phonological phrases) and clauses (corresponding roughly to intonational phrases), and some theories propose that phonological and intonational phrases are projected from syntactic constituents (Selkirk 1984, 1995; Nespor/Vogel 1986). In one possible prosodic rendering of the sentence shown in example (2), adapted from a sentence in Nespor and Vogel (1986), the parenthetical sentence it is said forms its own intonational phrase constituent (labeled with an 'I' subscript), resulting in a sentence with two major breaks separating three intonational phrases, (a) The giant panda, (b) it is said, and (c) eats only bamboo in its natural habitat. The last intonational phrase is divided into two less salient but still discrete constituents called phonological phrases (labeled with a 'P' subscript), eats only bamboo (a verb phrase), and in its natural habitat (a prepositional phrase). In this example, each prosodic constituent corresponds to a syntactic constituent.

(2)
[[The giant panda]P]I [[it is said]P]I [[eats only bamboo]P [in its natural habitat]P]I
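The bracketing in (2) is simply a shallow tree: a phonological utterance containing intonational phrases, which contain phonological phrases, which contain words. A small Python sketch (the encoding is ours) makes the constituency explicit and regenerates the bracketed notation:

```python
# Example (2) as nested prosodic constituents: a phonological utterance is
# a list of intonational phrases (I), each a list of phonological
# phrases (P), each a list of words. The encoding is our own.
UTTERANCE = [                                   # one phonological utterance
    [["The", "giant", "panda"]],                # I: one P
    [["it", "is", "said"]],                     # I: one P
    [["eats", "only", "bamboo"],                # I: two Ps
     ["in", "its", "natural", "habitat"]],
]

def render(utterance):
    """Print the bracketed form used in example (2)."""
    out = []
    for iphrase in utterance:
        ps = " ".join("[" + " ".join(p) + "]P" for p in iphrase)
        out.append("[" + ps + "]I")
    return " ".join(out)

print(render(UTTERANCE))
# [[The giant panda]P]I [[it is said]P]I
#     [[eats only bamboo]P [in its natural habitat]P]I
```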
Prosodic phrasing can vary and undergo restructuring, depending on such factors as rate of speech, size of constituent (Nespor/Vogel 1986), and semantic reasons related to interpretation (Gussenhoven 2004). For these and other reasons, prosodic and syntactic constituents are not always isomorphic – they don't always match up (Bolinger 1989). Example (3) shows the syntactic constituency of part of the children's story The House that Jack Built, and (4) shows that the prosodic constituent structure is different.

(3)
syntactic constituents: This is [the cat that ate [the rat that ate [the cheese…
(4)
prosodic constituents: This is the cat [that ate the rat [that ate the cheese…
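The regularity behind the mismatch in (3) and (4) can be stated procedurally: the prosody ignores the right-branching NP structure and instead opens a new chunk at each relative pronoun. A toy re-bracketer along those lines, our own sketch over a simplified word list:

```python
# The syntax in (3) embeds each relative clause inside the NP of its head
# noun; the prosody in (4) instead breaks before each "that", grouping the
# head noun with what precedes it. A toy re-bracketer (our own sketch).
WORDS = "this is the cat that ate the rat that ate the cheese".split()

def prosodic_chunks(words):
    """Start a new prosodic chunk at every relative pronoun."""
    chunks, current = [], []
    for w in words:
        if w == "that" and current:
            chunks.append(current)
            current = []
        current.append(w)
    chunks.append(current)
    return chunks

for chunk in prosodic_chunks(WORDS):
    print(" ".join(chunk))
# this is the cat
# that ate the rat
# that ate the cheese
```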
We see such mismatches at the level of the word as well. In the sentence, Robbie's been getting on my nerves, Robbie's is one prosodic word (also called a phonological word) organized around a single main word stress, but two morphosyntactic words, Robbie and is. The fact that prosody and (morpho)syntax are not isomorphic motivates the claim that prosody is a separate component of the grammar. Specific arguments against subsuming particular intonational markers within the syntactic component are offered in Sandler and Lillo-Martin (2006) and further developed in Sandler (2011).

There is evidence in the literature for the integrity of each of the constituents in the hierarchy. Apart from phonetic cues associated with them, certain phonological rules require particular constituents as their domain. We will look at only one example of a rule of this sort in spoken language, at the level of the phonological phrase constituent (also called the intermediate phrase). The boundary of this constituent may be marked phonetically by timing cues such as added duration, sometimes a brief pause, and a boundary tone. The example, French liaison, occurs within phonological phrases but not across phonological phrase boundaries (Selkirk 1984; Nespor/Vogel 1986). The [s] in les and the [t] in sont are pronounced when followed by a word beginning with a vowel in the same phonological phrase (indicated by carets in (5)), but the [s] in allés is not pronounced, though followed by a word consisting of a vowel, because it is blocked by a phonological phrase boundary (indicated by a double slash).

(5)
[Les^enfants]P [sont^allés]P // à l’école. ‘The children went to school.’
[French]
By respecting the phonological phrase boundary, such processes contribute to the temporal patterns of speech and provide evidence for the existence of the prosodic category 'phonological phrase' within the prosodic hierarchy. Other rules respect prosodic constituent boundaries at different levels of the hierarchy, such as the intonational phrase or the phonological utterance (Nespor/Vogel 1986).

Some clarification of the role that such processes play in our understanding of prosody is called for. The prosodic constituents are determined on the basis of their syntactic and/or semantic coherence together with the phonetic marking typically found at the relevant level of structure. Certain postlexical phonological processes, such as liaison and assimilations across word boundaries, may apply within a domain so determined. That is, their application is restricted by the domain boundary: they do not cross it. Such processes, which may be optional, are not treated as markers of the boundary; it is phonetic cues such as phrase-final lengthening and unmarked prominence patterns that have that role. Rather, the spreading/assimilation rules are seen as providing further evidence for the existence of the boundaries, which are themselves determined on independent grounds.

In sum, prosodic constituents are related to syntactic ones but are not always coextensive with them; they are marked by particular phonetic cues; and their boundaries may form the domain of phonological rules, such as assimilation (external sandhi).
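The notion of a rule whose application is bounded by a prosodic domain can be made concrete with a small computational sketch. The following Python fragment is purely illustrative: the representation of words as pairs of surface form and latent final consonant, and the naive vowel test, are simplifying assumptions, not a published formalism. It encodes sentence (5) as a list of phonological phrases and lets liaison apply only phrase-internally:

    # A minimal sketch of domain-bounded rule application. Assumed toy
    # representation: an utterance is a list of phonological phrases; each
    # word is a pair (surface form, latent final consonant or None).
    utterance = [
        [("les", "z"), ("enfants", None)],    # [Les^enfants]P
        [("sont", "t"), ("allés", "z")],      # [sont^allés]P
        [("à", None), ("l'école", None)],     # // à l'école
    ]

    def starts_with_vowel(word):
        return word[0] in "aeiouéè"

    def apply_liaison(phrase):
        """Pronounce a latent final consonant only when the next word in the
        SAME phonological phrase is vowel-initial; the phrase boundary itself
        blocks the rule, whatever follows it."""
        result = []
        for i, (form, latent) in enumerate(phrase):
            following = phrase[i + 1][0] if i + 1 < len(phrase) else None
            if latent and following and starts_with_vowel(following):
                result.append(form + "^" + latent)   # liaison applies
            else:
                result.append(form)                  # no trigger, or blocked
        return result

    for phrase in utterance:
        print(apply_liaison(phrase))
    # ['les^z', 'enfants']
    # ['sont^t', 'allés']
    # ['à', "l'école"]
    # The latent [z] of 'allés' stays silent: the vowel-initial word that
    # follows it lies outside its phonological phrase.

The point of the sketch is only the shape of the computation: the rule consults its domain, not the linear string as a whole.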
2.2. Prosodic constituents in sign language

Much has been written about the sign language syllable; suffice it to say that there is such a thing, and that it is characterized by a single movement or by more than one type of movement occurring simultaneously (Coulter 1978; Liddell/Johnson 1989; Sandler 1989, 2012; Brentari 1990, 1998; Perlmutter 1992; Wilbur 1993, 2011 and chapter 3, Phonology). This movement can be a movement of the hand from one place to another, movement of the fingers, movement at the wrist, or some simultaneous combination of these. The words of sign language are typically monosyllabic (see Sandler/Lillo-Martin 2006, chapter 14, and references cited there). However, the word and the syllable are distinguishable; novel compounds are disyllabic words, for example. But when two words are joined, through lexicalization of compounds or cliticization, they may reduce to the optimal monosyllabic form (Sandler 1993, 1999a). In other words, signs prefer to be monosyllabic.

Figure 4.1 shows how Israeli SL pronouns may cliticize to preceding hosts at the ends of phrases, merging two morphosyntactic words, each a separate syllable in citation form, into a single syllable. Under this type of cliticization, called coalescence (Sandler 1999a), the non-dominant hand articulates only the monosyllabic host sign, shop, while the dominant hand simultaneously articulates the host and clitic in reduced form (shop-there), superimposed on the same syllable. This is a type of non-isomorphism between morphosyntactic and prosodic structure: two lexical words form one prosodic word. It is comparable to Robbie is / Robbie's in English.

A study of the prosodic phonology of Israeli Sign Language found evidence for phonological and intonational phrases in that language (Nespor/Sandler 1999; Sandler 1999b, 2006). Phonological phrases are identified in this treatment on syntactic and phonetic grounds. Phonetically, the final boundary of a phonological phrase is characterized by hold or reiteration of the last sign in the phrase, or by a pause after it. An optional phonological process affecting the non-dominant hand provides evidence for the phonological phrase constituent. The process is a spreading rule, called Non-dominant Hand Spread (NHS), which may be triggered by two-handed signs. In this process, the non-dominant hand, configured and oriented as in the triggering sign, is present (though static) in the signing signal while the dominant hand signs the rest of the signs in the phrase.
Fig. 4.1: Citation forms of shop, there, and the cliticized form shop-there
The domain of the rule is the phonological phrase: if the process occurs, the spread stops at the phonological phrase boundary, like liaison in French. Spreading of the non-dominant hand was first noted by Liddell and Johnson (1986) in their treatment of ASL compounds, and this spreading occurs in Israeli SL compounds uttered in isolation as well. However, since compounds in isolation always comprise their own phonological phrases, a simpler analysis (if ASL is like Israeli SL in this regard) is that NHS is a post-lexical phonological process whose domain is the phonological phrase. Unlike French liaison, this rule does not involve sequential segments. Rather, the spread of the non-dominant hand from the triggering two-handed sign is simultaneous with the signing of other words by the dominant hand.

Figure 4.2 illustrates NHS in a sentence meaning 'I told him to bake a tasty cake, one for me and one for my sister'. Its division into phonological and intonational phrases is as follows: [[index1 tell-him]P [bake cake]P [tasty]P]I [[one for-me]P [one for-sister]P]I. In this sentence, the configuration and location of the non-dominant hand from the sign bake spread to the end of the phonological phrase: the non-dominant hand remains in the same configuration as in the source sign, bake, throughout the next sign, cake, which is a one-handed sign. The end of the phonological phrase is marked by a hold, that is, by holding the hand in position at the end of the last sign. The signs on either side of this boundary, him and tasty (not shown here), are not affected by NHS.
[bake cake]P
Fig. 4.2: Non-dominant Hand Spread from bake to cake in the same phonological phrase
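Because NHS is simultaneous rather than segmental, a toy model of it looks quite different from the liaison sketch above: instead of inspecting adjacent segments, it fills in a non-dominant-hand tier over the rest of the phrase. In the hedged Python sketch below, the two-field representation of signs and the configuration label for bake are hypothetical conveniences, not a transcription:

    # A sketch of Non-dominant Hand Spread (NHS). Assumed representation:
    # a phonological phrase is a list of (gloss, h2) pairs, where h2 is the
    # non-dominant hand configuration of a two-handed sign, or None for a
    # one-handed sign.

    def nhs(phonological_phrase):
        """Spread the most recent non-dominant hand configuration rightward
        through the phrase; the spread dies at the phrase boundary."""
        current = None
        spread = []
        for gloss, h2 in phonological_phrase:
            if h2 is not None:
                current = h2        # a two-handed sign (re)sets the trigger
            spread.append((gloss, current))
        return spread

    # The phrase [BAKE CAKE]P of Figure 4.2: BAKE is two-handed ('bake-h2'
    # is a stand-in label), CAKE is one-handed.
    print(nhs([("BAKE", "bake-h2"), ("CAKE", None)]))
    # [('BAKE', 'bake-h2'), ('CAKE', 'bake-h2')]

    # The following phrase starts fresh, so TASTY in [TASTY]P is unaffected:
    print(nhs([("TASTY", None)]))
    # [('TASTY', None)]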
Similar but not identical spreading behavior has been described in ASL (Brentari/Crossley 2002). As that study was based on somewhat different definitions and assumptions, and used a different methodology from the one described here, the results are difficult to compare at this point. However, like the Nespor and Sandler study, the ASL study does show spreading of the non-dominant hand beyond the domain of compound words. Explorations of the role of the non-dominant hand in prosody are found in Sandler (2006, 2012) and in Crasborn (2011).

The next constituent in the hierarchy is the intonational phrase, marked by a salient break that delineates certain syntactically coherent elements, such as (fronted) topics, extraposed elements, non-restrictive relative clauses, the two clauses of conditional sentences, and parentheticals (Nespor/Vogel 1986). This prosodic constituent achieves its salience from a number of phonetic cues (on ASL, see Wilbur 2000 and references cited there). In Israeli SL, in addition to the phonetic cues contributed by the boundary of the nested phonological phrase, intonational phrase boundaries are marked by a change in the position of the head or body, and by a change across the board in all elements of facial expression. An example is provided in section 3, Figures 4.3 and 4.4. The juncture between intonational phrases in both ASL (Baker/Padden 1978; Wilbur 1994) and Israeli SL (Nespor/Sandler 1999) is often punctuated by an eyeblink.

The whole body participates in the phonetic realization of sign language prosody. In her study of early and late learners of Swiss German Sign Language (SGSL), Boyes Braem (1999) found that both groups tend to match the temporal duration of two related constituents evenly, while only the early learners produce the rhythmic body sways which mark larger chunks of discourse of particular types. The particular characteristics and appearance of this body sway may be specific to SGSL. A recent study of prosody in BSL also documents a higher level of prosodic structure above the intonational phrase, and tests its perception experimentally (Fenlon 2010).

The validity of hierarchically organized prosodic constituents in sign language is lent credence by an early study of pauses in ASL (Grosjean/Lane 1977). The researchers found highly significant differences in the length of pauses (for them, pauses are holds in final position), depending on the level of the constituents separated by them: between sentences > between conjoined clauses > between NPs and VPs > within NPs or VPs.
3. Intonation

The intonational phrase is so named because it is the domain of the most salient pitch excursions of spoken language intonation. Let us see what this means in spoken language, and examine more closely its sign language equivalent: the intonation of the face.
3.1. Intonation in spoken language

Intonation can express a rich and subtle mélange of meanings in our utterances. Nuances of meaning such as additive, selective, routine, vocative, scathing, and many others
have been attributed to particular pitch contours or tunes, and the timing of these contours can also influence the interpretation (Gussenhoven 1984, 2004). Example (6) demonstrates how different intonational patterns can yield different interpretations of a sentence (from Pierrehumbert/Hirschberg 1990). In the notation used in this example, H and L stand for high and low tones, the asterisk means that the tone is accented, and the percent symbol indicates the end of an Intonational Phrase. These examples are distinguished by two tonal contrasts: the low tone on apple in example (a) versus the high tone on apple in (b); and the intermediate high tone before the disjunction or in (b), where there is no intermediate tone in (a).
(6) a. Do you want an apple or banana cake  (an apple cake or a banana cake)
                      L*       H*      L L%
    b. Do you want an apple or banana cake  (fruit or cake)
                      H*    H  H*      L L%
The examples illustrate a number of characteristics of prosody in spoken language. First, we see here that part of the meaning of the utterance is conveyed by the intonation and the way it aligns with the text. Second, at the phonological level, the atoms of the intonational system are just two tones, L (low) and H (high), which, accented or not, combine in different ways to form all the tunes of any language (Pierrehumbert 1980). Third, we see that pitch accents (asterisked) are assigned to prominent elements within a prosodic constituent, and that intonational contours, made up of several tones including a pitch accent and constituent boundary tones, like the H* L L% shown here, tend to cluster at the edges of prosodic constituents. Since prosodic structure is hierarchical and constituents are nested in larger constituents, the combined tone patterns of phonological and intonational phrases produce the most salient excursions at the Intonational Phrase boundary. Pierrehumbert and Hirschberg argue that intonation is componentially structured: particular meanings or pragmatic functions are associated with individual tones, and putting them together produces a combined meaning (see also Hayes/Lahiri 1991). We will see that this componentiality characterizes sign language intonation as well.

Apart from favoring alignment with prosodic over syntactic constituents, a different kind of non-isomorphism between syntax and prosody is revealed by intonation. While certain tunes are typically identified with particular syntactic structures, pragmatic context such as shared knowledge, expectation, or uncertainty often results in an atypical intonational tune. The declarative You're going to Poughkeepsie can get a questioning, incredulous intonation, You're going to Poughkeepsie?(!), if the announcement of such a journey was unexpected. The reverse is also possible, as in rhetorical questions, which have the syntactic form of questions but declarative intonation.

Intonation, then, expresses meaning, often determined by pragmatics. It can convey illocutionary force (marking declarative, interrogative, or vocative expressions, for example) and other discourse meanings like shared or expected information. Intonation can also mark emotional affect, in a system that has been termed paralinguistic (Ladd 1996).
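As a small aid to reading tunes like those in (6), the following Python helper unpacks the notation described above; it is illustrative only, and the three-way labeling is a simplification of the full Pierrehumbert system:

    # Unpack the tone notation of (6): H/L are high and low tones, '*'
    # marks a pitch accent, '%' marks an intonational phrase boundary tone;
    # a bare tone is an unstarred (intermediate) tone.

    def classify_tones(tune):
        labeled = []
        for tone in tune.split():
            if tone.endswith("*"):
                labeled.append((tone, "pitch accent"))
            elif tone.endswith("%"):
                labeled.append((tone, "boundary tone"))
            else:
                labeled.append((tone, "unstarred (intermediate) tone"))
        return labeled

    print(classify_tones("L* H* L L%"))    # tune (6a)
    # [('L*', 'pitch accent'), ('H*', 'pitch accent'),
    #  ('L', 'unstarred (intermediate) tone'), ('L%', 'boundary tone')]
    print(classify_tones("H* H H* L L%"))  # tune (6b): note the extra H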
3.2. Intonation in sign language

The idea that facial expression and certain other non-manual markers function in sign language like intonation in spoken languages has been in the air for a long time (e.g.,
Baker/Padden 1978; Reilly/McIntire/Bellugi 1990). On this intonation view, the facial pattern occurring on an utterance is predicted by semantic and pragmatic factors, such as illocutionary force and other discourse-relevant markers and relations, to which we will return shortly. Other researchers, following Liddell's (1980) early work on syntax, treat these markers as explicitly syntactic elements that necessarily occur on structures defined syntactically (and not pragmatically or semantically), structures such as yes/no questions, wh-questions, topics, and relative clauses (e.g., Petronio/Lillo-Martin 1997; Neidle et al. 2000), and even non-wh A-bar positions (Wilbur/Patschke 1999).

There is a tension between these two possibilities that has only recently begun to be addressed. The two views can be evaluated by investigating whether it is syntactic structure that makes the best predictions about the specification and distribution of the relevant markers, or whether they are best predicted by pragmatic/semantic factors. Proponents of the latter view argue that particular markers, such as furrowed brows on wh-questions, cannot be considered part of the syntactic component in sign languages. Here, only the pragmatic/semantic view is elaborated. See Sandler and Lillo-Martin (2006, chapters 15 and 23) and Sandler (2011b) for detailed discussion of the two perspectives, and Wilbur (2009) for an opposing view. The motivations for viewing facial expression in particular as comparable to intonation in spoken language are shown in (7):
(7) Facial expression as intonation
    (a) It fulfills many of the same pragmatic functions as vocal intonation, such as cuing different types of questions, continuation from one constituent to another, and shared information.
    (b) It is temporally aligned with prosodic constituents, in particular with intonational phrases.
    (c) It can be dissociated from syntactic properties of the text.
In their paper about the acquisition of conditional sentences in ASL, Reilly, McIntire, and Bellugi (1990) explain that the following string has two possible meanings, disambiguated by particular non-manual markers: you insult jane, george angry. With neutral non-manuals, it means 'You insulted Jane and George got angry'. But the string has the conditional meaning 'If you insult Jane, George will be angry' when the first clause is characterized by the following markers: raised brows and head tilt throughout the clause, with a head thrust at its close and a blink at the juncture between the two clauses. There is an optional sign for if in ASL, but in this string, only prosody marks the conditional.

It is not unusual for conditionals to be marked by intonation alone even in spoken languages. While English conditionals tend to have if in the first clause, conditionals may be expressed syntactically as coordinated clauses (with and) in that language (You walk out that door now and we're through), or with no syntactic clue at all and only intonation (He overcooks the steak, he's finished in this restaurant).

The description by Reilly and colleagues clearly brings together the elements of prosody by describing the facial expression and head position over the 'if' clause, as well as the prosodic markers at the boundary between the two phrases. The facial expression here is raised brows, compatible with Liddell's (1980) observation that markers of constituents such as these occur on the upper face, which he associates with particular types of syntactic constituents. He distinguished these from articulations of the lower face, which have adverbial or adjectival meanings, such as 'with relaxation and enjoyment', to which we return in section 5.1.

The Israeli Sign Language prosody study investigates the temporal alignment of intonational articulations with the temporal and other markers that set off prosodic constituents. As explained in section 2.2 and illustrated in Figures 4.4 and 4.5 below, in the sentences elicited for that study, all face articulations typically change at the boundary between intonational phrases, and a change in head or body position also occurs there.

There is a notable difference between the two modalities in the temporal distribution of intonation. Unlike the intonational tunes of spoken language, which occur in a sequence on individual syllables of stressed words and at prosodic constituent boundaries, the facial intonation markers of sign language co-occur simultaneously and typically span the entire prosodic constituent. The commonality between the two modalities is this: in both, the most salient intonational arrays are aligned with prosodic boundaries.

Liddell's early work on non-manuals described configurations involving certain articulations, such as brow raise and head tilt, in a variety of different sentence types, as noted above. Is it a coincidence that the same individual articulations show up in different configurations? Later studies show that it is not. In an ASL study, forward head or body leans are found to denote inclusion/involvement and affirmation, while leans backward signify exclusion/non-involvement and negation (Wilbur/Patschke 1998). In Israeli SL, the meanings of individual facial expressions are shown to combine to create more complex expressions with complex meanings. For example, a combination of the raised brows of yes/no questions and the squint of 'shared information' is found on yes/no questions about shared information, such as Have you seen that movie we were talking about? (Nespor/Sandler 1999). Similarly, the furrowed brow of wh-questions combines with the shared-information squint in wh-questions about shared information, such as Where is that apartment we saw together? (Sandler 1999b, 2003). Each of the components, furrowed brow, brow raise, and squint, pictured in Figure 4.3, contributes its own meaning to the complex whole in a componential system (cf. also chapter 14 on sentence types, chapter 15 on negation, and chapter 21 on information structure).

A semantic/pragmatic explanation for facts such as these, one that links the meanings or pragmatic intents of different constituents characterized by a particular facial
Fig. 4.3: Three common intonational facial elements: (a) furrowed brow (from a typical wh-question), (b) brow raise (from a typical yes/no question), and (c) squint (from a typical ‘shared information’ context).
expression, was first proposed by Coulter (1979). This line of reasoning is developed in detail for two Israeli SL intonational articulations, brow raise and squint (Dachkovsky 2005, 2008; Dachkovsky/Sandler 2009). Brow raise conveys a general meaning of dependency and/or continuation, much like high tone in spoken language. In questions, the continuation marked by brow raise leads to the answer, to be contributed by the addressee. In conditionals, the continuation marked by brow raise leads from the if clause to the consequent clause. Brow raise characterizes both yes/no questions and conditionals in many sign languages. The facial action squint, common in Israeli SL but not widely reported in other sign languages so far, instructs the interlocutor to retrieve information that is shared but not readily accessible. It occurs on topics, relative clauses, and other structures.

Put together with brow raise in conditionals, the squint conveys the meaning of an outcome that is not readily accessible because it is not realized: a counterfactual conditional. The occurrence of the combined expression, brow raise and squint, is reliable in Israeli SL counterfactual conditionals (95% of the 39 counterfactual conditionals elicited from five native Israeli SL subjects in the Dachkovsky study). An example is If the goalkeeper had caught the ball, they would have won the game. This sentence is divided into two intonational phrases. Figure 4.4 shows the whole utterance, and Figure 4.5 is a close-up showing the change of facial expression and head position on the last sign of the first intonational phrase and the first sign of the second. Crucially, the release or change of face and body actions occurs at the phrase boundary.
Fig. 4.4: Counterfactual conditional sentence with partial coding (from Dachkovsky/Sandler 2009)
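The componentiality argued for here can be rendered as a toy composition function. In the Python sketch below, the meaning labels are informal paraphrases of the glosses discussed in this section, not formal semantic objects, and simple collection of the components stands in for whatever composition operation a full analysis would require:

    # A toy model of componential facial intonation in Israeli SL.
    FACIAL_MEANINGS = {
        "brow_raise":    "dependency/continuation (e.g. question, 'if' clause)",
        "squint":        "retrieve shared, not readily accessible information",
        "furrowed_brow": "wh-question",
    }

    def interpret(facial_array):
        """Compose a simultaneous array of facial actions: each component
        contributes its own meaning to the complex whole."""
        return sorted(FACIAL_MEANINGS[action] for action in facial_array)

    # Yes/no question about shared information, and (with an unrealized
    # outcome) the counterfactual conditional of Figure 4.4:
    print(interpret({"brow_raise", "squint"}))

    # Wh-question about shared information:
    print(interpret({"furrowed_brow", "squint"}))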
A neutral conditional facial expression, characterized by brow raise without squint and extracted from the sentence If he invites me to the party, I will go, is shown in Figure 4.6 for comparison.

In addition to the non-isomorphism between morphosyntactic and prosodic constituency demonstrated for Israeli SL in section 2.2, non-isomorphism is also found between syntactic structure and intonational meaning. For example, while wh-questions typically occur with the furrowed brow facial expression shown in Figure 4.3, present
Fig. 4.5: Intonational phrase boundary
Fig. 4.6: Neutral conditional facial expression
in 92% of the wh-questions in the Dachkovsky (2005) study, other expressions are also possible. Figure 4.7 shows a wh-question uttered in the following context: You went to a party in Haifa and saw your friend Yoni there. If you had known he was going, you would have asked for a ride. The question you ask him is, "Why didn't you tell me you were going to the party?", syntactically a wh-question. As in spoken language, intonation can convey something about the (pragmatic) assumptions and the (emotional) attitude of the speaker/signer that cannot be predicted by the syntax. Here we do not see the furrowed brow (Figure 4.3a) typical of wh-questions. Instead, we see an expression that may be attributed to affect. As in spoken intonation (Ladd 1996), paralinguistic and linguistic intonation are cued by the same articulators, and distinguishing them is not always easy. See de Vos, van der Kooij, and Crasborn (2009) for discussion of the interaction between affective and linguistic intonation in Sign Language of the Netherlands (NGT).

In sum, facial expression serves the semantic/pragmatic functions of intonation in sign language; it is componentially structured; and the temporal distribution of linguistic facial intonation is determined by prosodic constituency.
Fig. 4.7: Atypical facial expression on a wh-question
4. Prominence

In addition to timing and intonation, prominence, or stress, is important to the interpretation of utterances. The sentences in (9) are distinguished only by where the prominence (marked here by capitals) is placed:

(9) a. Ron called Jeff an intellectual, and then he INSULTED him.
    b. Ron called Jeff an intellectual, and then HE insulted HIM.
It is the pattern of prominence that tells us whether calling someone an intellectual is an insult, and it also tells us who insulted whom.
4.1. Prominence in spoken language

Typically, languages have default prominence patterns that place prominence either toward the beginning or toward the end of prosodic constituents, depending on the word order properties of the language, according to Nespor and Vogel (1982). In English, a head-complement language, the prominence normally falls at the end: John gave a gift to Mary. English is a 'plastic' intonation language (Vallduví 1992), allowing prominence to be placed on different constituents if they are focused or stressed, as (9) showed. The stress placement in each of the following sentences indicates that each is an answer to a different question: John gave a gift to MARY (either default or with Mary focused), JOHN gave a gift to Mary, John GAVE a gift to Mary, or John gave a GIFT to Mary. The stress system of other languages, such as Catalan, is not plastic; instead of the stress roaming freely, the focused words move into the prominent position of the phrase, which remains constant.
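The plastic/non-plastic contrast amounts to two strategies for making focus and prominence coincide, which can be schematized as follows. In this hedged Python sketch, the sentence, the phrase-final prominence setting, and the naive reordering are all illustrative simplifications:

    # Two schematic strategies for aligning focus with prominence
    # (prominence is shown by capitalization).

    def plastic_focus(words, focus):
        """English-style: the prominence migrates to the focused word."""
        return [w.upper() if w == focus else w for w in words]

    def nonplastic_focus(words, focus):
        """Catalan-style (schematically): prominence stays phrase-final,
        so the focused word moves into the final position."""
        reordered = [w for w in words if w != focus] + [focus]
        return [w.upper() if w == focus else w for w in reordered]

    sentence = ["John", "gave", "a", "gift", "to", "Mary"]
    print(plastic_focus(sentence, "gave"))
    # ['John', 'GAVE', 'a', 'gift', 'to', 'Mary']   (the stress moves)
    print(nonplastic_focus(sentence, "gave"))
    # ['John', 'a', 'gift', 'to', 'Mary', 'GAVE']   (the word moves)

On this schematic view, the ASL findings reported below place languages of that type with the second strategy.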
4.2. Prominence in sign language

How do sign languages mark prominence? In the Israeli SL prosody study, the manual cues of pause, hold, or reiteration, and of increased duration and size (displacement), consistently fall on the final sign in the intonational phrases of isolated sentences, and the authors interpret these as markers of the default phrase-final prominence in Israeli SL. As Israeli SL appears to be a head-complement language, this prominence pattern is the predicted one.

A study of ASL using 3-D motion detection technology for measuring manual behavior determined that default prominence in ASL also falls at the ends of prosodic constituents (Wilbur 1999). That study attempted to tease apart the effects of phrase position from those of stress, and revealed that increased duration, peak velocity, and displacement are found in final position, but that peak velocity alone correlates with stress in that language. The author tried to dissociate stress from phrase-final prominence using some admittedly unusual (though not ungrammatical) elicited sentences. When stress was manipulated away from final position in this way, measurements indicated that added duration still occurred only phrase-finally, suggesting that duration is a function of phrase position and not of stress. The author reports further that ASL is a non-plastic intonation language, in which prominence does not tend to move to focus particular parts of an utterance; instead, the words or phrases typically move into the final prominent position of the phrase or utterance.

Certain non-manual cues also play a role in marking prominence. Contrastive stress in ASL is marked by body leans (Wilbur/Patschke 1998). In their study of focus in NGT, van der Kooij, Crasborn, and Emmerik (2006) also found that signers use leans (of the head, the body, or both) to mark contrastive stress, but that there is a tendency to lean sideways rather than backward and forward in that language. The authors point out that notions such as involvement and negation (Wilbur/Patschke 1998) are not the only factors affecting the direction of body leans in NGT. Pragmatic aspects of inter-signer interaction, such as the direction in which the interlocutor leaned in the preceding utterance, must also be taken into account in interpreting leans: signers tend to lean the opposite way from their addressee in the context-framing utterance, regardless of the semantic content of the utterance, positive or negative. Just as pragmatic considerations underlie prosodic marking of information that is old, new, or shared among interlocutors, other pragmatic factors, such as the inter-signer interaction described in the NGT study, must surely play a role in sign language prosody in general.
5. Residual issues

Two additional issues naturally emerge from this discussion, raised here both for completeness and as context for future research. The first is the inventory of phonetic cues in the sign language prosodic system, and the second is the role of modality in shaping prosody. Since the physical system of sign language transmission is so different from that of spoken language, the first problem for sign language researchers is to determine which phonetic cues are prosodic. This chapter attempts to make a clear distinction between the terms 'non-manuals' and 'prosodic markers'. The two are not synonymous. For one thing, the hands are very much involved in prosody, as we have seen. For another, not all non-manual markers are prosodic. Just as manual articulations encode many grammatical functions, so too do non-manual articulations. This means that neither 'manuals' nor 'non-manuals' constitutes a natural class in the grammar. The discussion in section 5.1 of how the articulatory space is divided up among different linguistic systems underscores the physical differences between the channels of transmission for prosody in spoken and sign languages, which brings us to the second issue, in section 5.2: the influence of modality on the form and organization of prosody.
5.1. Not all non-manual articulations are prosodic

Physical properties alone cannot distinguish prosodic units from other kinds of elements in language. In spoken language, duration marks prosodic constituent boundaries, but it can also make a phonemic contrast in some languages. Tones are the stuff of which intonation is made, but in many languages, tone is also a contrastive lexical feature. Word-level stress is different from phrase-level prominence. In order to determine to which component of the grammar a given articulation belongs, we must look to function and distribution.

In sign language too, activation of the same articulator may serve a variety of grammatical functions (see also Pfau/Quer (2010) for discussion of the roles of non-manual markers). Not all manual actions are lexical, and not all non-manual articulations are prosodic. A useful working assumption is that a cue is prosodic if it corresponds to the functions of prosodic cues known from spoken language, sketched briefly in sections 2.1, 3.1, and 4.1. A further test is whether the distribution of the cue in question is determined by the domain of prosodic constituents, where these can be distinguished from morphosyntactic constituents.

It is clear that the main function of the hands in sign languages is to articulate the lexical content, to pronounce the words. But we have seen here that as articulators they also participate in the prosodic system, by modulating their behavior in accordance with the temporal and stress patterns of utterances. Different phonological processes involving the non-dominant hand observe prosodic constituent boundaries at the prosodic word and phonological phrase level, though in the lexicon, the non-dominant hand is simply part of the phonological specification of a sign. The head and body perform the prosodic functions of delineating constituency and marking prominence, but they are also active in the syntax, in their role as a logophoric pronoun expressing point of view (Lillo-Martin 1995).

Similarly, while articulations that are not manual (movements of the face, head, and body) often play an important role in prosody, not all non-manual articulations are prosodic. We will first consider two types of facial action, one of which may not be prosodic at all, while the other, though prosodic, is not part of the grammar; it is paralinguistic. We then turn to actions of the head and eyes.

Actions of the lower face convey adverbial or adjectival meaning in ASL (Liddell 1980), Israeli SL (Meir/Sandler 2008), and other sign languages. As a group, these may differ semantically and semiotically from the actions of the upper face attributed to intonation, and the way in which they align temporally with syntactic or prosodic constituents has yet to be investigated. A range of other articulations are made by the mouth. Borrowed mouthing from spoken language that accompanies signing may respect the boundaries of the prosodic word in Israeli SL (Sandler 1999a), similarly to the way in which spread of the non-dominant hand respects phonological phrase boundaries. But we do not yet have a clear picture of the range and distribution of mouth action with respect to the prosodic constituents of sign languages generally (see Boyes Braem/Sutton-Spence 2001).

Another type of facial action is affective or emotional facial expression. This system uses (some of) the same articulators as linguistic facial expression, but has different properties in terms of temporal distribution, number of articulators involved, and pragmatic function (Baker-Shenk 1983; Dachkovsky 2005, 2010; de Vos/van der Kooij/Crasborn 2009). It appears that some intonational facial configurations are affective, and not part of the linguistic grammar, as is the case in spoken language (Ladd 1996).

Negative headshake is an example of a specific non-manual action whose role in the grammar is not yet fully determined (cf. chapter 15 on negation). Sometimes attributed to prosody or intonation, this element is at least sometimes a non-linguistic gesture, as it is for hearing speakers in the ambient culture. It may occur without any signs, but it may also negate an utterance without a negative manual sign. A comparative study of negation in German Sign Language and Catalan Sign Language indicates that the distribution of the headshake varies from sign language to sign language (Pfau/Quer 2007). The authors assume that the signal is part of the syntax. It is not yet clear whether this signal has prosodic properties, or even whether it belongs to the same grammatical component in different sign languages.

Eye gaze is also non-manual, but it may not participate in the prosodic system. In Israeli SL, we have found that this element does not perform any expressly prosodic function, nor does it line up reliably with prosodic constituency. Researchers have argued that gaze may perform non-linguistic functions such as turn-taking (Baker 1977) or pointing (Sandler/Lillo-Martin 2006), and/or syntactic functions related to agreement (see Neidle et al. (2000) and Thompson et al. (2006) for opposing views of gaze as agreement, the latter an eye-tracking study). The eyebrows and the upper and lower eyelids participate in prosody, but the eyeballs have something else in mind.
5.2. Prosody in sign and spoken language

Sign language has more articulators to work with, and it seems to utilize all of them in prosody. The brows, the upper and lower eyelids, head and body position, the timing and prominence properties conveyed by the hands, and even the dual articulator, the non-dominant hand, all participate. The availability of many independent articulators conspires with the capacities of the visual system to create a signal with a good deal of simultaneous information.

Prosody in spoken language also involves a more simultaneous layering of information than other aspects of language in that modality (hence the term 'suprasegmental'), yet it is still quite different in physical organization from that of sign language. Pitch contours of spoken intonation are transmitted by the same conduit as the words of the text: vocal cord vibration. In sign language, intonation is carried by articulations of the upper face while the text is conveyed by the hands. In addition, the upper face has different articulators which may move independently. This independence of the articulators has one obvious result: different intonational 'tones' (such as brow raise and squint) can co-occur with one another in a simultaneous array, together with the whole constituent with which they are associated. Intonation in spoken language, by contrast, is conveyed by a linear sequence of tones, most of which congregate at the boundaries of intonational phrases. Do differences such as these result in greater flexibility and range of expression in sign language prosody? Do they influence the grammatical organization of the system? These are intriguing questions for future research.

Also of interest is the use of facial signals by speakers, for example, of raised brows to mark prominence (Swerts/Krahmer 2009), and of raised brows and head tilt to accompany questions (Srinivasan/Massaro 2003). In fact, there is experimental evidence that the upper face has a special role in the visual prosody accompanying spoken language, as it does in sign language (Swerts/Krahmer 2008). In sign languages, intonation and prosodic constituency are systematically marked and constitute a linguistic system. Since it is possible to transmit spoken language effectively without visual cues (on the telephone, to blind people, or in the dark), it is reasonable to surmise that the visual prosody of spoken language is augmentative and paralinguistic. However, an empirical comparison of the patterning and role of visual prosody in sign and spoken language has not yet been attempted.
6. Conclusion

Sign languages have rich prosodic systems, exploiting the phonetic possibilities afforded by their articulators: the face, the hands, the head, and the torso. Each of these articulators participates in other grammatical components as well, and their prosodic status is identified on semantic/pragmatic grounds and by the nature of the constituents with which they are temporally aligned. Utterances are divided into constituents, marked mainly by the action of the hands, and are modulated by intonation-like articulations, expressed mainly by the face. The prosodic system is non-isomorphic with syntax, although it interacts with that level of structure, as it does with the phonological level, in the form of rules such as Non-dominant Hand Spread.

The field is young, and much territory is uncharted. Some controversies are not yet resolved, many of the facts are not yet known or confirmed, and the prosody of many sign languages has not been studied at all. Similarly, all is far from settled in spoken language research on prosody. Interesting theoretical issues that are the subject matter of current prosodic research are waiting to be addressed in sign language inquiries too: issues related to the nature and organization of the prosodic system, as well as its interaction with syntax and other components of the grammar. A key question for future research follows from the non-trivial differences in the physical form of prosody in the spoken and sign modalities: which properties of prosody are truly universal?

Acknowledgements: Research on prosody in Israeli Sign Language was supported by grants from the Israeli Science Foundation. I also thank reviewers for helpful comments on this chapter.
7. Literature

Baker, Charlotte 1977 Regulators and Turn-taking in American Sign Language Discourse. In: Friedman, Lynn (ed.), On the Other Hand: New Perspectives on American Sign Language. New York: Academic Press, 215–236.
Baker, Charlotte/Padden, Carol A. 1978 Focusing on the Non-manual Components of ASL. In: Siple, Patricia (ed.), Understanding Language through Sign Language Research. New York: Academic Press, 27–57.
Baker-Shenk, Charlotte 1983 A Micro-analysis of the Non-manual Components of American Sign Language. PhD Dissertation, University of California, Berkeley.
Beckman, Mary E./Pierrehumbert, Janet B. 1986 Intonational Structure in English and Japanese. In: Phonology Yearbook 3, 255–310.
Bolinger, Dwight 1989 Intonation and its Uses: Melody in Grammar and Discourse. Stanford, CA: Stanford University Press.
Boyes Braem, Penny 1999 Rhythmic Temporal Patterns in the Signing of Deaf Early and Late Learners of German Swiss Sign Language. In: Sandler, Wendy (ed.), Language and Speech (Special Issue on Prosody in Spoken and Signed Languages 42(2/3)), 177–208.
Boyes Braem, Penny/Sutton-Spence, Rachel (eds.) 2001 The Hands are the Head of the Mouth: The Mouth as Articulator in Sign Languages. Hamburg: Signum.
Brentari, Diane 1990 Theoretical Foundations of American Sign Language Phonology. PhD Dissertation, University of Chicago.
Brentari, Diane 1998 A Prosodic Model of Sign Language Morphology. Cambridge, MA: MIT Press.
Brentari, Diane/Crossley, Laurinda 2002 Prosody on the Hands and Face: Evidence from American Sign Language. In: Sign Language and Linguistics 5(2), 105–130.
Coerts, Jane 1992 Non-manual Grammatical Markers: An Analysis of Interrogatives, Negations, and Topicalizations in Sign Language of the Netherlands. PhD Dissertation, University of Amsterdam.
Coulter, Geoffrey 1978 Raised Brows and Wrinkled Noses: The Grammatical Function of Facial Expression in Relative Clauses and Related Constructions. In: Caccamise, Frank/Hicks, Doin (eds.), American Sign Language in a Bilingual, Bicultural Context. Proceedings of the Second National Symposium on Sign Language Research and Teaching. Silver Spring: NAD, 65–74.
Crasborn, Onno 2011 The Nondominant Hand. In: Oostendorp, Marc van/Ewen, Colin/Hume, Elizabeth/Rice, Keren (eds.), The Blackwell Companion to Phonology. 5 Volumes. Oxford: Blackwell, 223–240.
Dachkovsky, Svetlana 2005 Facial Expression as Intonation in ISL: The Case of Conditionals. MA Thesis, University of Haifa.
Dachkovsky, Svetlana 2008 Facial Expression as Intonation in Israeli Sign Language: The Case of Neutral and Counterfactual Conditionals. In: Quer, Josep (ed.), Signs of the Time. Selected Papers from TISLR 2004. Hamburg: Signum, 61–82.
Dachkovsky, Svetlana 2010 Affective and Grammatical Intonation in Israeli Sign Language. Manuscript, University of Haifa.
Dachkovsky, Svetlana/Sandler, Wendy 2009 Visual Intonation in the Prosody of a Sign Language. In: Language and Speech 52(2/3), 287–314.
Deuchar, Margaret 1984 British Sign Language. London: Routledge & Kegan Paul.
Engberg-Pedersen, Elisabeth 1990 Pragmatics of Non-manual Behaviour in Danish Sign Language. In: Edmondson, William/Karlsson, Fred (eds.), SLR '87: Papers from the Fourth International Symposium on Sign Language Research. Hamburg: Signum, 121–128.
Fenlon, Jordan 2010 Seeing Sentence Boundaries: The Production and Perception of Visual Markers Signaling Boundaries in Sign Languages. PhD Dissertation, University College London.
Fox, Anthony 2000 Prosodic Features and Prosodic Structure: The Phonology of Suprasegmentals. Oxford: Oxford University Press.
Goldstein, Louis/Whalen, Douglas/Best, Catherine (eds.) 2006 Papers in Laboratory Phonology VIII. Berlin: Mouton de Gruyter.
Grosjean, Francois/Lane, Harlan 1977 The Perception of Rate in Spoken and Sign Languages. In: Perception and Psychophysics 22, 408–413.
Gussenhoven, Carlos 1984 On the Grammar and Semantics of Sentence Accent. Dordrecht: Foris.
Gussenhoven, Carlos 2004 The Phonology of Tone and Intonation. Cambridge: Cambridge University Press.
Hayes, Bruce/Lahiri, Aditi 1991 Bengali Intonational Phonology. In: Natural Language and Linguistic Theory 9, 47–96.
Janzen, Terry/Shaffer, Barbara 2002 Gesture as the Substrate in the Process of ASL Grammaticization. In: Meier, Richard/Cormier, Kearsy/Quinto-Pozos, David (eds.), Modality and Structure in Signed and Spoken Languages. Cambridge: Cambridge University Press, 199–223.
Johnston, Trevor 1992 The Realization of the Linguistic Metafunctions in a Sign Language. In: Language Sciences 14(4), 317–353.
Kooij, Els van der/Crasborn, Onno/Emmerik, Wim 2006 Explaining Prosodic Body Leans in Sign Language of the Netherlands: Pragmatics Required. In: Journal of Pragmatics 38, 1598–1614.
Ladd, Robert 1996 Intonational Phonology. Cambridge: Cambridge University Press.
Liddell, Scott K. 1978 Non-manual Signals and Relative Clauses in American Sign Language. In: Siple, Patricia (ed.), Understanding Language through Sign Language Research. New York: Academic Press, 59–90.
Liddell, Scott K. 1980 American Sign Language Syntax. The Hague: Mouton.
Liddell, Scott K./Johnson, Robert E. 1986 American Sign Language Compound Formation Processes, Lexicalization, and Phonological Remnants. In: Natural Language and Linguistic Theory 4, 445–513.
Lillo-Martin, Diane 1995 The Point of View Predicate in American Sign Language. In: Emmorey, Karen/Reilly, Judy (eds.), Language, Gesture, and Space. Hillsdale, NJ: Erlbaum, 155–170.
Meir, Irit/Sandler, Wendy 2008 A Language in Space: The Story of Israeli Sign Language. Mahwah, NJ: Erlbaum.
Neidle, Carol/Kegl, Judy/MacLaughlin, Dawn/Bahan, Benjamin/Lee, Robert G. 2000 The Syntax of American Sign Language: Functional Categories and Hierarchical Structure. Cambridge, MA: MIT Press.
Nespor, Marina/Vogel, Irene 1982 Prosodic Domains of External Sandhi Rules. In: Hulst, Harry van der/Smith, Norval (eds.), The Structure of Phonological Representations. Dordrecht: Foris, 225–255.
Nespor, Marina/Vogel, Irene 1986 Prosodic Phonology. Dordrecht: Foris.
Nespor, Marina/Sandler, Wendy 1999 Prosody in Israeli Sign Language. In: Sandler, Wendy (ed.), Language and Speech (Special Issue on Prosody in Spoken and Signed Languages 42(2/3)), 143–176.
Oostendorp, Marc van/Ewen, Colin/Hume, Elizabeth/Rice, Keren (eds.) 2011 The Blackwell Companion to Phonology. 5 Volumes. Oxford: Blackwell.
Perlmutter, David 1992 Sonority and Syllable Structure in American Sign Language. In: Linguistic Inquiry 23, 407–442.
Petronio, Karen/Lillo-Martin, Diane 1997 Wh-movement and the Position of Spec-CP: Evidence from American Sign Language. In: Language 73(1), 18–57.
Pfau, Roland/Quer, Josep 2007 On the Syntax of Negation and Modals in Catalan Sign Language and German Sign Language. In: Perniss, Pamela/Pfau, Roland/Steinbach, Markus (eds.), Visible Variation: Comparative Studies on Sign Language Structure. Berlin: Mouton de Gruyter, 129–162.
Pfau, Roland/Quer, Josep 2010 Non-manuals: Their Prosodic and Grammatical Roles. In: Brentari, Diane (ed.), Sign Languages. Cambridge: Cambridge University Press, 381–402.
Pierrehumbert, Janet 1980 The Phonology and Phonetics of English Intonation. PhD Dissertation, MIT.
Pierrehumbert, Janet/Hirschberg, Julia 1990 The Meaning of Intonational Contours in the Interpretation of Discourse. In: Cohen, Philip R./Morgan, Jerry/Pollack, Martha E. (eds.), Intentions in Communication. Cambridge, MA: MIT Press, 271–311.
Reilly, Judy/McIntire, Marina/Bellugi, Ursula 1990 The Acquisition of Conditionals in American Sign Language: Grammaticized Facial Expressions. In: Applied Psycholinguistics 11, 369–392.
Sandler, Wendy 1989 Phonological Representation of the Sign: Linearity and Non-linearity in American Sign Language. Dordrecht: Foris.
Sandler, Wendy 1993 A Sonority Cycle in American Sign Language. In: Phonology 10, 243–279.
Sandler, Wendy 1999a Cliticization and Prosodic Words in a Sign Language. In: Hall, Tracy/Kleinhenz, Ursula (eds.), Studies on the Phonological Word. Amsterdam: Benjamins, 223–254.
Sandler, Wendy 1999b The Medium and the Message: Prosodic Interpretation of Linguistic Content in Israeli Sign Language. In: Sign Language and Linguistics 2, 187–216.
Sandler, Wendy 2006 Phonology, Phonetics, and the Non-dominant Hand. In: Goldstein, Louis/Whalen, Douglas/Best, Catherine (eds.), Papers in Laboratory Phonology VIII. Berlin: Mouton de Gruyter, 185–212.
Sandler, Wendy 2011a The Phonology of Movement in Sign Language. In: Oostendorp, Marc van/Ewen, Colin/Hume, Elizabeth/Rice, Keren (eds.), The Blackwell Companion to Phonology. 5 Volumes. Oxford: Blackwell, 577–603.
Sandler, Wendy 2011b Prosody and Syntax in Sign Languages. In: Transactions of the Philological Society 108(3), 298–328.
Sandler, Wendy 2012 The Phonological Organization of Sign Languages. In: Language and Linguistics Compass 6(3), 162–182.
Sandler, Wendy/Lillo-Martin, Diane 2006 Sign Language and Linguistic Universals. Cambridge: Cambridge University Press.
Selkirk, Elisabeth 1984 Phonology and Syntax: The Relation Between Sound and Structure. Cambridge, MA: MIT Press.
Selkirk, Elisabeth 1995 Sentence Prosody: Intonation, Stress, and Phrasing. In: Goldsmith, John (ed.), The Handbook of Phonological Theory. Cambridge, MA: Blackwell, 550–569.
Srinivasan, Ravindra/Massaro, Dominic 2003 Perceiving Prosody from the Face and Voice: Distinguishing Statements from Echoic Questions in English. In: Language and Speech 46(1), 1–22.
Swerts, Marc/Krahmer, Emiel 2008 Facial Expression and Prosodic Prominence: Effects of Modality and Facial Area. In: Journal of Phonetics 36(2), 219–238.
Swerts, Marc/Krahmer, Emiel 2009 Audiovisual Prosody: Introduction to the Special Issue. In: Language and Speech 52(2/3), 129–135.
Thompson, Robin/Emmorey, Karen/Kluender, Robert 2006 The Relationship Between Eye Gaze and Agreement in American Sign Language: An Eye-tracking Study. In: Natural Language and Linguistic Theory 24, 571–604.
Vallduví, Enric 1992 The Informational Component. New York: Garland.
Vos, Connie de/Kooij, Els van der/Crasborn, Onno 2009 Mixed Signals: Combining Linguistic and Affective Functions of Eyebrows in Questions in Sign Language of the Netherlands. In: Language and Speech 52(2/3), 315–339.
Wilbur, Ronnie 1993 Syllables and Segments: Hold the Movement and Move the Holds! In: Coulter, Geoffrey R. (ed.), Current Issues in ASL Phonology. New York: Academic Press, 135–168.
Wilbur, Ronnie 1994 Eyeblinks and ASL Phrase Structure. In: Sign Language Studies 84, 221–240.
Wilbur, Ronnie 1999 Stress in ASL: Empirical Evidence and Linguistic Issues. In: Language and Speech 42(2/3), 229–250.
Wilbur, Ronnie 2000 Phonological and Prosodic Layering of Non-manuals in American Sign Language. In: Emmorey, Karen/Lane, Harlan (eds.), The Signs of Language Revisited: An Anthology in Honor of Ursula Bellugi and Edward Klima. Mahwah, NJ: Erlbaum, 215–244.
Wilbur, Ronnie 2011 The Syllable in Sign Language. In: Oostendorp, Marc van/Ewen, Colin/Hume, Elizabeth/Rice, Keren (eds.), The Blackwell Companion to Phonology. 5 Volumes. Oxford: Blackwell, 1399–1344.
Wilbur, Ronnie/Patschke, Cynthia 1998 Body Leans and the Marking of Contrast in American Sign Language. In: Journal of Pragmatics 30, 275–303.
Wilbur, Ronnie/Patschke, Cynthia 1999 Syntactic Correlates of Brow Raise in ASL. In: Sign Language and Linguistics 2(1), 3–40.
Woll, Bencie 1981 Question Structure in British Sign Language. In: Woll, Bencie/Kyle, Jim G./Deuchar, Margaret (eds.), Perspectives on British Sign Language and Deafness. London: Croom Helm, 136–149.
Wendy Sandler, Haifa (Israel)
II. Morphology

5. Word classes and word formation

1. Introduction
2. The signed word
3. Sign language morphological processes
4. Word classes
5. Word formation
6. Literature
Abstract

This chapter deals with three aspects of words in sign languages: (i) the special nature of the sub-lexical elements of signed words and the consequences for the relationship between words; (ii) the classification of words into word classes; and (iii) the morphological means for creating new words in the signed modality. It is shown that although almost all of the structures and phenomena discussed here occur in spoken languages as well, the visual-spatial modality has an impact on all three aspects, in that sign languages may show different preferences from spoken languages. Three central morphological operations are discussed: compounding, affixation, and reduplication. Sign languages endow these operations with flavors that are available only to manual-spatial languages, thanks to the existence of two major articulators and their ability to move in various spatial and temporal patterns. These possibilities are exploited by sign languages, resulting in a strong preference for simultaneous morphological structures in both inflectional and derivational processes.
1. Introduction

Words have to perform several 'jobs' in a language: they provide the users of that language with means to refer to whatever concept the users want to express, be it an entity, an idea, an event, or a property. Words also have to combine with each other to allow users to convey information: to say something about something or someone. In order to fulfill the first task, there must be ways to create new words as the need arises to refer to new concepts. Regarding the second task, when combined to form larger units, words should be able to perform different roles, such as arguments, predicates, and modifiers. Different words may be specialized for particular roles, and languages may have means for creating words for specific roles.

Sign languages are natural languages produced in a physical modality different from that of spoken languages. Both types of language have to perform the same communicative functions with the same expressive capabilities, yet the physical means available to each type of language vary greatly. Sign languages are produced by the hands, body, and face; they are transmitted through space, and perceived by the eyes. Spoken languages are produced by the speech organs, transmitted as sound waves, and perceived by the ears. Might these disparities make any difference to the nature of the elements that make up each system? To their organization? To the processes they undergo? Focusing on words, we ask whether words, the relationships between words, and the means for creating new words are affected by the particular modality of the language (see also chapter 25 on language and modality).

This chapter deals with three aspects of words in sign languages: the special nature of the sub-lexical elements of signed words and the consequences for the relationship between words; the classification of words into word classes; and the morphological means for creating new words in the signed modality. The modality issue runs across the entire chapter. In each section, I examine the ways in which modality affects the linguistic structures and processes described.
2. The signed word

Sign languages have words, that is, conventionalized units of form-meaning correspondence, like spoken languages. These units have psychological reality for their users (Zeshan 2002). They are composed of sub-lexical units and are therefore characterized by duality of patterning (Stokoe 1960). They are characterized by specific phonological structures and are subject to certain phonological constraints (Sandler 1999; see chapter 3, Phonology). Sign language words are usually referred to as signs, and we will adopt this terminology here as well.

Obviously, signs differ from words in their physical instantiation. The physical differences result in structural differences as well. Signs are much more simultaneously organized than words (Stokoe 1960), and tend to be monosyllabic (Sandler 1999). But signs differ from words in another important respect: they are much better at iconically depicting the concepts they denote (see Taub 2001 and references cited there). Sign languages make use of this capability. The lexicons of sign languages contain many more iconic and partly iconic signs than those of spoken languages, since spoken languages are limited to acoustic iconicity. Iconicity results from the nature of the sub-lexical elements building up a sign, which in turn has an effect on how signs are related to each other.
2.1. The nature of sub-lexical units

One of the design features of human language is duality of patterning (Hockett 1960): the existence of two levels of combinatorial structure, one combining meaningless elements (phonemes) into meaningful elements, the other combining meaningful elements (morphemes and words) into larger meaningful units. Sign languages are also characterized by duality of patterning. Signs are not holistic units, but are made up of specific formational units: hand configuration, movement, and location (Stokoe 1960). However, these formational units are in many cases not devoid of meaning.

Take, for example, the verb eat in Israeli Sign Language (Israeli SL) and other sign languages as well. The hand assumes a particular shape (G), moves toward the mouth from a location in front of it, and executes this movement twice. 'Eat' means "to put (food) in the mouth, chew if necessary, and swallow" (Webster's New World Dictionary, Third College Edition). The sign eat is iconic, since there is a regular mapping between its formational elements and components of its meaning: the G handshape corresponds to holding a solid object (food); the mouth corresponds to the mouth of the eater, the agent argument; the movement towards the mouth corresponds to putting the object into the mouth; and the double movement indicates a process.

Many signs are only partially iconic: some formational elements correspond to meaning components, but not all. Other signs are arbitrary; none of their formational components can be said to correspond to a meaning component in any obvious way (though some researchers claim that no signs are completely arbitrary, and that sign formational elements are always meaning-bearing, e.g., Tobin 2008). The lexicon of any sign language, then, consists of signs that are arbitrary and signs that are iconic to different degrees, yet all signs make use of the same formational elements. Spoken language lexicons are not that different; they also have both arbitrary and non-arbitrary words. The difference between the two types of languages lies in the relative proportions of the different kinds of words. In spoken languages, non-arbitrary words are quite marginal, making it possible (and convenient) to ignore them. In sign languages, non-arbitrary signs constitute a substantial part of the lexicon. Boyes Braem (1986) estimates that at least a third of the lexical items of Swiss-German Sign Language are iconic. Zeshan (2000) estimates that the percentage might be even higher (at least half of the signs) for Indopakistani Sign Language (IPSL).

Iconic signs present a challenge for the traditional division between phonemes and morphemes, since the basic formational units, the phonemes of sign languages, may be meaning-bearing rather than meaningless. Meaningfulness is usually regarded as the factor distinguishing phonemes from morphemes: phonemes are meaningless, while morphemes are meaningful units. Yet phonemes are also the basic building blocks of meaning-bearing units in a language, and in sign languages, these basic building blocks are themselves meaning-bearing. Can they be regarded as morphemes, then? This would also seem problematic, since they are not composed of more basic formational elements, and the units they attach to are not words, stems, or roots, but rather other basic formational units. Johnston and Schembri (1999, 118) propose that these units function simultaneously as phonemes and morphemes, since they serve as the basic formational building blocks and at the same time as minimal meaning-bearing units. They propose the term 'phonomorphemes' to capture the nature of these basic elements. This dual nature of the basic formational units is even more evident in classifier constructions (see chapter 8 on classifiers).
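The iconic mapping just described for eat can be stated explicitly as a partial function from formational elements to meaning components; full arbitrariness is then the special case of an empty mapping. The Python sketch below uses informal feature labels (not a phonological transcription) and a deliberately crude measure of degree of iconicity:

    # Israeli SL EAT: formational elements and their iconic correspondences,
    # following the description in the text. Labels are informal.
    EAT = {
        "handshape":  "G",
        "location":   "mouth",
        "movement":   "toward location",
        "repetition": 2,
    }

    EAT_ICONIC_MAP = {
        "handshape":  "holding a solid object (food)",
        "location":   "the mouth of the eater (the agent argument)",
        "movement":   "putting the object into the mouth",
        "repetition": "a process, not a single event",
    }

    def iconicity_degree(sign, mapping):
        """Crude measure: share of formational elements that carry meaning.
        0.0 = fully arbitrary; 1.0 = every element is meaning-bearing."""
        return len(mapping) / len(sign)

    print(iconicity_degree(EAT, EAT_ICONIC_MAP))          # 1.0
    print(iconicity_degree(EAT, {"location": "mouth"}))   # 0.25: partially iconic
    print(iconicity_degree(EAT, {}))                      # 0.0: arbitrary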
2.2. The structure of the lexicon: sign families

Leaving theoretical issues aside, the meaningfulness of the formational building blocks of signs has consequences for the organization of the sign language lexicon. Signs that share a formational element (or elements) often also share some meaning component. For example, many signs in Israeli SL that are articulated on the temple express some kind of mental activity (know, remember, learn, worry, miss, dream, day-dream); signs
articulated on the chest often denote feelings (love, suffer, happy, proud, pity, heartache). Many signs with a V handshape denote activities performed by the legs (jump, get-up, fall, walk, run, stroll). Fernald and Napoli (2000) enumerate groups of signs, or sign families, in American Sign Language (ASL) that share formational elements, be it location, movement, handshape, or any combination of these. They show that the phenomenon of word families is very robust in ASL, characterizing the entire lexicon. Works on other sign languages (e.g., Brennan (1990) on British Sign Language (BSL); Johnston and Schembri (1999) on Australian Sign Language (Auslan); Meir and Sandler (2008) on Israeli SL) show that this is characteristic of other languages in the signed modality as well. Signs in such a ‘family’ are related to each other not by inflectional or derivational means, yet they are related nonetheless. Fernald and Napoli posit a new linguistic unit, the ‘ion-morph’: a combination of one or more phonological features that, within a certain set of signs, has a specific meaning. Take, for example, the signs mother and father in ASL: they have the same movement, orientation, and handshape, and differ with respect to location: chin for mother, forehead for father. Within this restricted set of signs, the combination of specific movement, orientation, and handshape has the meaning ‘parent’. The chin and the forehead, in turn, are ion-morphs denoting female and male in signs expressing kinship terms, such as sister-brother, niece-nephew, grandmother-grandfather. Fernald and Napoli (2000, 41) argue that ion-morphs are relevant not only for sign languages, but for spoken languages as well. A case in point is phonosymbolism, the ability of certain sounds or combinations of sounds to carry specific ‘sound images’ that go with particular semantic fields, such as fl- representing a liquid substance in motion, as in flow, flush, flood, or fluid. Yet one can find word families even in more grammatical domains. For example, most question words in English begin with wh-. The labial glide carries the interrogative meaning within a specific set of words, and it may contrast with the voiced interdental fricative in pairs like then/when and there/where, the latter carrying the meaning of ‘definiteness’, as in the/that/this/those. The examples from both sign and spoken languages clearly show that there are ways other than inflection and derivation to relate words to one another. Whether these relations are morphological in nature is a difficult theoretical question, which can be conveniently set aside when dealing with spoken languages, since word families are less central to the structure of their lexicons. In sign languages, in contrast, they are an important characteristic of the lexicon. They may also play a role in creating new words (as suggested by Fernald and Napoli 2000), since language users may rely on existing ion-morphs when new lexical items are coined. Such cases again raise the question of whether or not derivational morphology is at play here. The special nature of the sub-lexical units in signs affects the lexicon in another respect as well. When phonemes are combined to create a sign, the meaning of the resulting unit is often componential and transparent. This means that signs in the lexicon of a sign language can be less conventionalized than words of a spoken language, since their meaning can often be computed.
Johnston and Schembri (1999, 126) make a distinction between signs and lexemes, the latter having a meaning “which is (a) unpredictable and/or somewhat more specific than the sign’s componential meaning potential even when cited out of context, and/or (b) quite unrelated to its componential meaning components (i.e., lexemes may have arbitrary links between form and meaning).” Lexemes, then, can be completely arbitrary, but more importantly, they are
completely conventionalized, and can therefore be thought of as stored in the lexicon of the language. Signs, in contrast, are more productive than lexemes. They can be invented ‘on the spot’, because of the transparency of their components, and are therefore less lexicalized and less conventionalized than lexemes. A signer, for example, can invent a sign meaning ‘the three of them were walking together’ by extending three fingers and moving the hand in space. Such a sign can be understood in the appropriate context even if there is no conventional sign with that meaning in the specific sign language used by the signer. Johnston and Schembri show that signs and lexemes have different phonological, morphological, and semantic characteristics, and suggest that only lexemes should be part of the lexicon. An interesting question that arises is whether signs (as opposed to lexemes) are words, and if they are, whether they form a separate word class. One specific phenomenon that has been referred to in this context is classifier constructions, whose word status is an unresolved problem in the sign language literature (see chapter 8, Classifiers). Classifier constructions are often excluded from analyses of word classification because of their unclear status. We return to this issue in section 4. The lesson to be learned from the nature of signs and their components is that the line between the lexicon and the morphological component may be less definite than is usually assumed. Having raised these problematic issues, we now turn to those that are more straightforward within the realm of morphology. We examine which morphological operations are available to sign languages, and how these operations are used to distinguish between different types of words and to create new words.
3. Sign language morphological processes

Morphology provides machinery for creating new words and for creating different forms of a word. The former is the realm of derivation, the latter of inflection. Derivational and inflectional processes differ in their productivity, regularity, and automaticity. Inflectional processes are regarded as regular and automatic, in that they apply to all members of a given category, while derivational processes are usually less regular and non-automatic (though, as with any linguistic categorization, this distinction is often blurred and not as dichotomous as it is presented here). In spite of this functional difference, the morphological mechanisms used for derivation and inflection are the same. The three main morphological operations are compounding, affixation, and reduplication. Words formed by such operations are complex, in the sense that they contain additional morphological content when compared to the bases they operate on. However, morphological complexity need not coincide with added phonological complexity, since morphological operations can be sequential or simultaneous. A sequential operation adds phonological segments onto a base: suffixes (as in baker) and prefixes (as in unhappy). In a simultaneous operation, meaningful units are added not by adding segments but rather by changing them. The plurality of feet, for example, is encoded by changing the quality of the vowel of the singular form foot. Both types of operation are found in spoken and in sign languages, but there is a difference in preference. In spoken languages, the sequential type is very common while simultaneous operations
are rarer. Sign languages, in contrast, show a marked preference for simultaneous morphological operations. Sequential affixal morphology is very infrequent and (apart from compounding) has been reported in only a few sign languages. This tendency towards simultaneous structuring characterizes all linguistic levels of sign languages, and has been attributed to the visuo-spatial modality (Emmorey 2002 and references cited there; Meier et al. 2002). Sequential morphology in the signed modality is quite similar to its spoken language counterpart: elements in a sequence (words and affixes) form a complex word by virtue of being linearly concatenated to one another. The Israeli SL compound volunteer is formed by combining the two signs heart and offer into a complex lexical unit. In the process, several changes, some of which are modality-driven, may take place; these are described in section 5.1.1. But by and large, sequential operations in the two modalities are quite similar. When we turn to simultaneous morphology, however, the analogy is less clear. What would simultaneous morphology look like in a sign language? Which phonological features are changed to encode morphological processes? It turns out that it is the movement component of the sign that is the one most exploited for morphological purposes. Take, for example, the sign learn in Israeli SL (Figure 5.1). The base form has a double movement of the hand towards the temple. Several repetitions of the sign with its double movement yield an iterative meaning, ‘to study again and again’. If the sign is articulated with a slower and larger single movement, repeated three times, the verb is inflected for continuative aspect, meaning ‘to study for a long time’.

Fig. 5.1: Three forms of the sign learn (Israeli SL): (a) base form (b) iterative (c) durational. Copyright © 2011 by Sign Language Lab, University of Haifa. Reprinted with permission.

A change in the movement pattern of a sign distinguishes nouns from formationally similar verbs in several sign languages (see section 4.4.1). Repetition of a noun sign in several locations in space denotes plurality (see chapter 6, Plurality). A change in the direction of a specific class of verbs (agreement verbs) indicates a change in the syntactic arguments of the verb in many sign languages (see chapter 7, Verb Agreement). In addition to changes in movement, handshape change with classifying verbs can also be analyzed as simultaneous inflection (and as a certain kind of verb-argument agreement, see chapter 8, Classifiers). Thus simultaneous morphology in sign languages is implemented by changing features of the movement of the sign, and to a lesser degree by handshape change. It is
simultaneous in the sense that it does not involve adding phonological segments. The signs ask and question are related to each other more like the English noun-verb pair cóntrast-contrást than like the pair government-govern. Both signs consist of one syllable; they differ in the prosodic features imposed on the syllabic structure. This type of simultaneous morphology is often described as comparable to the templatic morphology characteristic of Semitic languages, where morphological distinctions are encoded by associating phonological material with different prosodic templates (Sandler 1989; Sandler/Lillo-Martin 2006). The two types of sign language morphology are characterized by different properties (Aronoff/Meir/Sandler 2005). Sequential operations are sparse; they are arbitrary in form; the affixes are related to free forms in the language and can therefore be regarded as having been grammaticalized from free words; and they are derivational and less regular. Simultaneous operations are numerous; many of them are productive; they are related to spatial and temporal cognition, and most of them are non-arbitrary to various degrees. They can be inflectional or derivational. It follows, then, that there is a partial correlation between the simultaneous vs. sequential distinction and the inflection vs. derivation dichotomy: sequential processes in sign languages are derivational, while simultaneous processes can be either inflectional or derivational. Inflection in sign languages is thus confined to simultaneous instantiation, whereas derivational processes make use of both simultaneous and sequential morphology. These differences are summarized in Table 5.1. Both morphologies play a role in distinguishing word classes in sign languages and in deriving new lexical items.

Tab. 5.1: Two types of sign-language morphology

SIMULTANEOUS | SEQUENTIAL
⫺ Adds morphological material by changing features of formational elements (mainly the movement component) | ⫺ Adds morphological material by adding phonological segments to a base
⫺ Preferred in the sign modality | ⫺ Less preferred in the sign modality
⫺ Both inflectional and derivational | ⫺ Only derivational
⫺ Numerous in different sign languages | ⫺ Relatively sparse in different sign languages
⫺ Motivated to various degrees, related to spatial cognition | ⫺ Tend to be more arbitrary
⫺ Not grammaticalized from free words | ⫺ Grammaticalized from free words
4. Word classes

4.1. Introduction

Word classes are often referred to as ‘parts of speech’, from Latin pars orationis, literally ‘piece of what is spoken’ or ‘segment of the speech chain’. Although the two terms
are used interchangeably in current linguistic practice (a practice which I follow in this chapter as well), it should be pointed out that, for the Greeks and Romans, the primary task was to divide the flow of speech into recognizable and repeatable pieces (hence parse); categorizing was secondary to identification (Aronoff, p.c.). In this chapter, however, we will concern ourselves with categorization and classification. There are various ways to classify the words of a given language. However, the term ‘word classes’ usually refers to the classification of words according to their syntactic and morphological behavior, e.g., the ability to appear in a certain syntactic environment, to assume a specific syntactic role (argument, predicate, modifier), and to co-occur with a particular set of inflectional affixes. Many of the words belonging to the same class also share some aspect of meaning. For example, words which typically occur in argument position and take number and case inflections often denote entities, whereas words occurring in predicate position and taking tense inflection often denote events. Yet there is no full overlap between a semantically based classification and a morpho-syntactic one, making the classification of any given language challenging, and a cross-linguistic comparison even more so. The first major division of words in the lexicon is into content words and function words. Content word classes are generally open (i.e., they have large numbers of members and accept new members easily and regularly), and they tend to have specific, usually extra-linguistic meaning (they are used to refer to the world or to a possible world). They tend to be fairly long, and their text frequency is rather low (Haspelmath 2001). Function words usually belong to small and closed classes. They are usually defined by their function, as they do not have concrete meaning; they tend to be quite short, and their text frequency is high. A few function word classes in sign languages are explored in other chapters of this volume: pronouns (chapter 11) and auxiliary verbs (chapter 10). Other function word classes mentioned in the sign language literature are numerals (see e.g., Fuentes/Tolchinsky 2004) and question words and negative words (Zeshan 2004a,b; see also chapters 14 and 15). In this chapter the focus is on content word classes. Function words will be mentioned only when they are relevant for diagnosing specific content word classes. The major content word classes are nouns, verbs, adjectives, and adverbs. It is an empirical question whether this classification is universal, and whether the same set of criteria can be applied cross-linguistically to identify and define the different classes in every language. Clearly, languages vary greatly in their syntactic and morphological structures. Therefore, syntactic and morphological criteria can be applied only on a language-particular basis. For a cross-linguistic study, a semantically based classification would be much more feasible, since all languages presumably have words that refer to different concept classes such as entities, events, and properties. But, as pointed out above, semantic criteria often do not fully overlap with morpho-syntactic criteria for any particular language. The challenge, then, is to develop a set of criteria that would be descriptively adequate for particular languages and would at the same time enable cross-linguistic comparison.
As Haspelmath (2001) points out, the solution that is usually adopted (often implicitly) is to define word classes on a language-particular basis using morpho-syntactic criteria, and then use semantic criteria for labeling these classes: the word class that includes most words for things and persons is called ‘noun’; the one that includes most words for actions and processes is called ‘verb’; etc. It is also usually the case that the correspondences ‘thing-noun’ and ‘action-verb’ are the
unmarked extension of the respective word class; marked extensions are often indicated by derivational affixes. This methodology implicitly assumes some kind of semantic basis for word classification, and that this basis is universal. Such assumptions should be tested by studying languages that are as typologically diverse as possible. Sign languages, as languages produced in a different modality, constitute a very good test case.
4.2. Word classes in the signed modality

Sign languages, like spoken languages, have lexicons consisting of lexemes of different types that refer to different notions (entities, actions, states, properties, etc.) and combine with each other to form larger units, phrases, and sentences. However, as a group, sign languages differ from spoken languages in three major respects relevant to the present discussion. Firstly, and most obviously, they are articulated and transmitted in a different modality from spoken languages. Secondly, sign languages as a group are much younger than spoken languages. And finally, the field of sign language linguistics itself is young, having emerged only a few decades ago. The modality difference raises several questions: (i) Would languages in a different modality display different kinds of word classes? For example, would the spatial nature of sign languages give rise to a word class that denotes spatial relations? (ii) Would iconicity play a role in differentiating between word classes? (iii) Do languages in a different modality have a different set of properties to distinguish between word classes? (iv) Do we need to develop a totally different set of tools to categorize signs? Sign languages as a group are also much younger than spoken languages. Spoken languages are either several millennia or several hundred years old, or they are derived from old languages. In contrast, the oldest sign languages known to us today are about 300 years old (for BSL, see Kyle and Woll 1985; for French Sign Language (LSF), see Fischer 2002), and some are much younger: Israeli SL is about 75 years old (Meir/Sandler 2008), and Nicaraguan Sign Language (ISN) is about 35 years old (Senghas 1995). It may very well be that sign languages existed in earlier times, but they left no records and therefore cannot be studied. All we know about sign languages comes from studying the sign languages available to us today, and these are young. Young spoken languages, creoles, are characterized by a dearth of inflectional morphology (McWhorter 1998). Furthermore, the lexicons of both creoles and pidgins are described as consisting of many multifunctional words, that is, words used both as nouns and verbs, or as nouns and adjectives. For example, askim in Tok Pisin can function both as a noun and as a verb (Romaine 1989, 223). As we shall see, multifunctionality is characteristic of sign languages as well. Word classification in young languages therefore cannot rely heavily on morphology. These two factors, modality and young age, contribute to the fact that sign languages as a group form a distinct typological morphological type (Aronoff/Meir/Sandler 2005). As young languages they hardly have any sequential morphology. They lack nominal
inflections such as case and gender, and they do not have tense inflections on verbs. These inflectional categories are key features in determining word classes in many spoken languages (though, of course, many spoken languages lack such inflectional categories, and similar difficulties for word classification therefore arise). On the other hand, as visuo-spatial languages, sign languages are characterized by the rich spatial (simultaneous) morphology described in section 3. Can spatial modulations play a role in determining word classes, as morphological inflections do in spoken languages? Would they identify the same word classes found in spoken languages? In addition to the youth of the languages, the field of sign language linguistics is also new, dating back to the early 1960s. In analyzing the linguistic structure of sign languages, sign linguists often rely on theories and methodologies developed on the basis of spoken languages. Since linguistics as a field is much older than sign linguistics, it makes sense to rely on what is known about how to study spoken languages. This also has the advantage of making it possible to compare findings in the two types of languages. However, it runs the risk of analyzing sign languages through the lens of spoken languages, and of missing important phenomena if they are unique to sign languages (see, e.g., Slobin 2008 on this issue). These three factors ⫺ modality, youth of language, and youth of field ⫺ make the study of word classes in sign languages challenging and non-trivial. Indeed, systematic studies of word classification in sign languages are very few. Though terms such as noun, verb, adjective, pronoun, etc. are abundant in the sign language literature, there have been very few attempts at a principled word classification of any studied sign language, and very few researchers explicitly state on what grounds the terms ‘noun’, ‘verb’, etc. are used. However, as the field of sign language linguistics expands, more linguistic operations and structures are discovered which can be helpful in determining word classes in sign languages. We now turn to some classifications that have been suggested, and examine the means by which sign languages differentiate between word classes.
4.3. Word classifications suggested for sign languages

The earliest attempt to provide criteria for identifying word classes in a sign language lexicon is found in Padden (1988). She suggests the following criteria for identifying the three major content word classes in ASL: nouns can be modified by quantifiers, adjectives can inflect for intensive aspect, and verbs cannot be pre-modifiers of other signs. Under this classification, nouns and verbs are defined on distributional syntactic grounds, and adjectives on morphological grounds. Notice that verbs are only defined negatively, probably because there is no inflection common to all and only verbs in the language. Also, it is not clear that this set of criteria applies to all and only the members of each class. Zeshan (2000) suggests a word classification of IPSL according to the spatial characteristics of signs. One class consists of signs that cannot move in space at all, a second class consists of signs that are produced in neutral space and can be articulated in various locations in space, and the third class consists of directional signs, that is, signs that move between locations in space associated with referents. The criterion of spatial behavior is clearly modality specific, since words in spoken languages do not have
spatial properties. Therefore, such an analysis, even if it provides a descriptively adequate analysis of a particular language, does not allow for cross-modality comparisons and generalizations. In addition, it is not clear whether such a classification has any syntactic or semantic corollaries within the language. For example, the class of signs that cannot move in space includes signs meaning ‘understand’, ‘woman’, and ‘I’ (Zeshan 2000, 58). These signs do not seem to have any semantic commonality, and it is doubtful whether they have anything in common syntactically. Therefore, the usefulness of this classification does not extend beyond a purely formational classification. Recently, a comprehensive and methodical attempt to establish a set of criteria for defining word classes in sign languages has been made by Schwager and Zeshan (2008). Their goal is to develop a cross-linguistically applicable methodology that would give adequate descriptive results for individual languages. They explicitly take semantics as a starting point, since the semantic classification is cognitively based and hence language independent. They compile a set of binary semantic features that define three basic concept classes: entity, event, and property. After assigning signs to different classes based on their semantics, Schwager and Zeshan proceed to examine how signs in each concept class map onto syntactic roles and morphological operations. Four basic syntactic roles are listed: argument, predicate, argument modifier, and predicate modifier. As for morphological criteria, a list of 17 morphological processes that have been described in the sign linguistics literature is compiled, and these processes are classified according to the concept classes they co-occur with. In order to test the validity of their approach, they apply it to corpora compiled from three unrelated sign languages: German Sign Language (DGS), Russian Sign Language (RSL), and Sign Language of Desa Kolok (KK), a sign language that developed in a small village community in Bali with a high incidence of hereditary deafness. Words with comparable meanings were identified and extracted from the corpora, and were analyzed according to the procedure described above. This comparison pinpoints both similarities and differences between the languages. Even at the semantic level, signs referring to similar concepts may not belong to the same concept class in two languages. For example, the sign deaf in DGS may refer to a person or a property, while in KK it refers only to a person. Therefore, in DGS this sign is listed both as an entity and as a property, while in KK it is classified only as an entity. In considering the combination of concept classes with syntactic roles, some more interesting differences emerge. DGS, but not KK, has event signs that can be used in argument position. The sign work, for example, can be used in predicate position, but also in argument position, as in (1) (Schwager/Zeshan 2008, 534, example 26). Also, in DGS signs denoting properties can assume a modifier or a predicate position, whereas in KK they are restricted to predicate position.

(1) work find difficult#ints(intensive)
    ‘It is very difficult to find a job.’ [DGS]
The list of morphological modulations serves as a useful tool for identifying the morphological nature of different sign languages. KK has far fewer morphological processes than DGS and RSL, especially in the event class. Of the 13 processes listed for events, KK has only 3, while DGS and RSL have 11 each. Therefore KK is much more isolating than the two other languages, and morphological operations are much less helpful in establishing word classes in this language.
These results show that, as in spoken languages, different sign languages vary in terms of their word classes. However, it may be that the variation in the signed modality is less extreme than that found among languages in the spoken modality. Further comparative studies of sign languages, and of sign vs. spoken languages, are needed to assess this intuitive observation. One type of evidence that is not used in their analysis is distributional evidence, such as the co-occurrence of signs with certain function word classes. Distributional properties are language-specific, and hinge on identifying the relevant function words and syntactic environments for each language. Yet some cross-linguistic generalizations can be made. For example, nouns are more likely to co-occur with pointing signs (often termed index or ix), and can serve as antecedents for pronouns. Verbs are more likely to co-occur with auxiliary verbs. As I point out below, some such observations have already been made for different languages, and it is hoped that they will be incorporated into future investigations of sign language word classes. In spite of the lack of distributional evidence, Schwager and Zeshan’s analysis shows that it is possible to arrive at a systematic, theoretically sound approach to word classification in sign languages. Such an analysis provides descriptions of word classes in specific languages, but also allows for cross-linguistic and cross-modality comparisons.
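Schwager and Zeshan’s procedure, semantic features first and the mapping onto syntactic roles and morphological processes second, can be made concrete in a small sketch. The following Python fragment is a minimal illustration of the logic only; the feature names, their values, and the sample entry are invented here for exposition and do not reproduce Schwager and Zeshan’s (2008) actual feature inventory.

```python
# A minimal sketch of a feature-driven word-class procedure in the spirit of
# Schwager and Zeshan (2008). Feature names, values, and the sample entry are
# invented for exposition; they are not the authors' actual inventory.

from dataclasses import dataclass, field

# Binary semantic features defining the three basic concept classes.
CONCEPT_CLASSES = {
    "entity":   {"time_stable": True,  "gradable": False},
    "event":    {"time_stable": False, "gradable": False},
    "property": {"time_stable": True,  "gradable": True},
}

@dataclass
class SignRecord:
    gloss: str
    features: dict                                      # semantic features
    syntactic_roles: set = field(default_factory=set)   # roles observed in the corpus
    morph_processes: set = field(default_factory=set)   # attested modulations

def concept_classes(sign):
    """A sign may satisfy more than one concept class (cf. DGS deaf,
    listed both as an entity and as a property)."""
    return [name for name, feats in CONCEPT_CLASSES.items()
            if all(sign.features.get(f) == v for f, v in feats.items())]

# DGS work: an event sign that also occurs in argument position (example (1)).
work = SignRecord("WORK", {"time_stable": False, "gradable": False},
                  syntactic_roles={"predicate", "argument"},
                  morph_processes={"durational", "habitual"})
print(concept_classes(work))                 # ['event']
print("argument" in work.syntactic_roles)    # True in DGS, not in KK
```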
4.4. Means for differentiating between specific word classes

Though very few works try to establish general criteria for determining word classes of the entire lexicon of a sign language, many works target more restricted domains of the lexicon, and describe certain structures and processes that apply to specific classes or sub-parts of classes. These involve both morphological and distributional criteria.
4.4.1. Noun-verb pairs

Descriptions of various sign languages often comment that many signs are multifunctional, and can serve both as a noun and as a verb (denoting an entity or an event). This is not surprising given the young age of sign languages, but it has also been argued to be modality driven. The following paragraph is from the introduction to the first dictionary of Israeli SL (Cohen/Namir/Schlesinger 1977, 24):

Two concepts which in spoken language are referred to by words belonging to different parts of speech will often have the same sign in sign language. The sign for sew is also that for tailor, namely an imitation of the action of sewing ... eat and food are the same sign ... and to fish is like fisherman ... In English, as in many other languages, words of the same root belonging to different parts of speech (like ‘bake’ and ‘baker’) are often distinguished inflectionally. They are denoted by the same sign in sign language since it has neither prefixes nor suffixes. These, being non-iconic, would seem to be out of tune with a language in which many signs have some degree of transparency of meaning, and are therefore unlikely to arise spontaneously in a sign language.
Fig. 5.2: a. ASL noun-verb pair: chair-sit; b. Israeli SL noun-verb pair: question-ask. Figure a reprinted with permission from Padden (1988). Figure b Copyright © 2011 by Sign Language Lab, University of Haifa. Reprinted with permission.
Given the propensity of sign languages towards iconicity, and the non-iconicity of sequential derivational affixes, affixes comparable to, e.g., -tion, -ize, and -al in English are not expected to be found in sign languages. Yet several studies of noun-verb pairs show that it is not impossible to distinguish formationally between word classes in a sign language; however, one has to know what to look for. It turns out that subtle differences in the quality of the movement component of certain signs may indicate their word class. The first work to show that nouns and verbs may exhibit systematic formational differences is Supalla and Newport (1978). They describe a set of 100 related noun-verb pairs in ASL, where the noun denotes an instrument and the verb an action performed with or on that instrument, e.g., scissors and cut-with-scissors, chair and to-sit (see Figure 5.2a), or iron and to-iron. These pairs differ systematically in the properties of
the movement component: in nouns it is reduplicated, restricted, and constrained; the movement of the related verbs is not. Following their seminal work, similar phenomena have been attested in various sign languages. Sutton-Spence and Woll (1999, 109) report that in BSL noun-verb pairs, e.g., sermon-preach, nouns have a restrained, abrupt end and verbs do not. This example shows that signs exhibiting the alternation are not necessarily restricted to instrument-action pairs. Similarly, in formationally related noun-verb pairs in Israeli SL, the verbs typically have a longer movement, as in question vs. ask (Meir/Sandler 2008; see Figure 5.2b). In Russian Sign Language as well, qualities of the movement component were the most reliable properties distinguishing nouns from verbs (Kimmelman 2009): nouns but not verbs (in noun-verb pairs) tend to have repeated movements, and verbs tend to have a wider movement amplitude than the corresponding nouns. Johnston (2001) provides an explanation for the repeated movement of nouns but not of their paired verbs in Auslan. In this language, the best exemplars of the alternation are signs referring to actions which are inherently reversible, such as open-shut (e.g., turning a knob, opening and shutting a drawer, turning a key). The signs representing these actions and entities are iconic, their direction of movement depicting the direction of the action. It is this iconicity that is the basis for the noun-verb distinction: a single movement in one of the two possible directions is interpreted as a process (one of the two possible processes), while a repeated bi-directional movement is interpreted as naming a salient participant in the action, the participant on which the action in both directions is performed (the knob, the drawer, or the key in the actions mentioned above). The formational difference between nouns and verbs may be rooted in iconicity, as suggested by Johnston, but in some sign languages this formational difference has expanded to non-iconic cases as well, suggesting that the form is taking on a life of its own. Hunger (2006) measured the duration (in numbers of frames) of 15 noun-verb pairs in Austrian Sign Language (ÖGS), both in isolation and in connected signing. Her results show that verbs do indeed take twice as long to produce as nouns. Interestingly, the longer duration of verbs characterizes even verbs which are not inherently durational (e.g., book-open, photograph, lock). Hunger therefore concludes that the longer duration of verbal signs cannot be attributed to iconicity effects. Rather, this formational difference “can be interpreted as a distinctive marker for verbal or nominal status” (p. 82). The lesson to be learned from these studies is that word classes can be distinguished formationally in the signed modality, by recruiting the movement component of signs for the task. Although this device may be rooted in iconicity, in some languages it seems to have already extended beyond the iconically-based set of signs, and is on its way to becoming a formal morphological marker.
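Hunger’s comparison rests on a simple frame-count measure: count the video frames a sign occupies and compare the noun with its paired verb. The sketch below illustrates only this arithmetic; the frame counts and the frame rate are hypothetical values invented for this example, not Hunger’s data.

```python
# Illustration of a frame-based duration comparison, after Hunger (2006).
# The frame counts and the frame rate are hypothetical; only the method
# (count frames, compare noun vs. verb durations) follows the study.

FPS = 25  # assumed frame rate for the seconds conversion; a hypothetical value

# (noun_frames, verb_frames) for made-up noun-verb pairs:
pairs = {
    "lock":       (9, 19),
    "photograph": (8, 17),
    "book-open":  (10, 21),
}

for gloss, (noun_f, verb_f) in pairs.items():
    print(f"{gloss}: noun {noun_f / FPS:.2f}s, verb {verb_f / FPS:.2f}s, "
          f"verb/noun ratio {verb_f / noun_f:.1f}")
# With these toy numbers the verb consistently takes about twice as long,
# mirroring the pattern Hunger reports for OeGS.
```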
4.4.2. Inflectional modulations

One of the most commonly used criteria for determining word classes in spoken languages is morphological inflection. Inflectional affixes are very selective with respect to the lexical base they attach to (Zwicky/Pullum 1983). A group of words that take a particular inflectional affix can therefore be regarded as belonging to one class. Notice,
however, that the term ‘affix’, which is commonly used for a concrete sequential morpheme, can also be used to refer to a process or a change in features that is expressed simultaneously on the inflected word. In sign languages, inflections take the form of modulations to the movement component of the sign. Numerous inflections have been described in the literature, the main ones being:

Verbs:
(a) Encoding arguments: verb agreement; reciprocal; multiple; exhaustive.
(b) Aspect: habitual; durational; continuative; iterative; protractive; delayed completive; gradual.

Nouns: plurality.

Predicative adjectives: pre-dispositional; susceptative; continuative; intensive; approximative; iterative; protractive.

What all these inflections have in common is that they make use of the movement component of the sign in order to encode specific grammatical categories. For example, the intensive inflection of adjectives in Israeli SL imposes lengthening of the movement of the base sign (Sandler 1999). In ASL this inflection takes the form of an increased length of time in which the hand is held static in the first and last locations (Sandler 1993, 103⫺129). Many aspectual modulations, such as the durational and iterative, impose a reduplicated circular movement on the base sign. Most of the inflections occur on verbs and adjectives, suggesting that inflectional modulations are restricted to predicate position. Since several inflections occur on both verbs and adjectives (e.g., continuative, iterative, protractive), it may be that these inflections are diagnostic of a syntactic position more than of a specific word class. This, however, should be determined on a language-specific basis. The use of these inflections for determining word classes is somewhat problematic. Firstly, morphological classes often do not coincide with concept classes; no single morphological operation applies across the board to all members of a particular concept class. For example, Klima and Bellugi (1979) describe several adjectival inflections, but these co-occur only with adjectives denoting a transitory state. Verb agreement, which in many spoken languages serves as a clear marker of verbs, characterizes only one sub-class of verbs in sign languages, agreement verbs. Secondly, many of these operations are limited in their productivity, and it is difficult to determine whether they are derivational or inflectional (see Engberg-Pedersen 1993, 61⫺64, for Danish Sign Language (DSL); Johnston/Schembri 1999, 144, for Auslan). Thirdly, since all these inflections involve modulation of the movement component, their application is sometimes blocked for phonological reasons: body-anchored verbs, for instance, cannot inflect for verb agreement. Inflectional operations, then, cannot serve by themselves as diagnostics for word classes. But, as in spoken languages, they can help in establishing word classes for particular languages, with corroborative evidence from semantic, syntactic, and distributional facts.
4.4.3. Word-class-determining affixes

Although a language may lack formational features characterizing the part of speech of base words, it may still have certain derivational affixes that mark the resulting word
as belonging to a certain part of speech. The forms of English chair, sit, and pretty do not indicate that they are a noun, a verb, and an adjective respectively. But nation, nationalize, and national are marked as such by the derivational suffixes -tion, -ize, and -al in their form. Can we find similar cases in sign languages? In general, sequential affixation is quite rare in sign languages, as discussed above. Of the descriptions of affixes found in the literature, very few refer to the part of speech of the resulting words. Two relevant affixes are described in Israeli SL, and two in Al-Sayyid Bedouin Sign Language (ABSL), a language that emerged in a Bedouin village in Israel in the past 70 years. Aronoff, Meir, and Sandler (2005) describe a class of prefixes in Israeli SL that derive verbs. This class includes signs made by pointing either to a sense organ ⫺ the eye, nose, or ear ⫺ or to the mouth or head. Many of the complex words formed with them can be glossed ‘to X by seeing (eye)/hearing (ear)/thinking (head)/intuiting (nose)/saying (mouth)’, e.g., eye+check ‘to check something by looking at it’; nose+sharp ‘to discern by smelling’; mouth+rumors ‘to spread rumors’. But many have idiosyncratic meanings, such as nose+regular ‘to get used to’ and eye+catch ‘to catch red-handed’ (see Figure 5.3). Although the part of speech of the base word may vary, the resulting word is almost always used as a verb. For example, the word eye/nose+sharp means ‘to discern by seeing/smelling’, though sharp by itself denotes a property. In addition to their meaning, the distributional properties of these complex words also support the claim that they are verbs: they co-occur with the negative sign glossed as zero, which negates verbs in the language. Aronoff, Meir, and Sandler conclude that these prefixes behave as verb-forming morphemes.
Fig. 5.3: Israeli SL sign with a verb-forming prefix: eye+catch ‘to catch red-handed’. Copyright © 2011 by Sign Language Lab, University of Haifa. Reprinted with permission.
Another Israeli SL affix is a suffix glossed as -not-exist, and its meaning is more or less equivalent to English -less (Meir 2004; Meir/Sandler 2008, 142⫺143). This suffix attaches to both nouns and adjectives, but the resulting word is invariably an adjective: important+not-exist means ‘of no import’, and success+not-exist ‘without success, unsuccessful’. The main criterion for determining word class in this case is semantic: the complex word denotes a property (‘lacking something’).
Fig. 5.4: Two ABSL complex words with suffixes determining word class: a. Locations: pray+there ‘Jerusalem’; b. Objects: drink-tea+round-object ‘kettle’. Copyright © 2011 by Sign Language Lab, University of Haifa. Reprinted with permission.
An interesting class of complex words has been described in ABSL, in which the second member is a pointing sign indicating a location (Aronoff et al. 2008; Meir et al. 2010). These complex words denote names of locations ⫺ cities and countries ⫺ as in long-beard+there ‘Lebanon’, head-scarf+there ‘Palestinian Authority’, pray+there ‘Jerusalem’ (see Figure 5.4a). If locations are regarded as a specific word class, then these words contain a formal suffix indicating their classification (parallel to English -land or -ville).
Finally, another set of complex words in ABSL refers to objects, and contains a component indicating the relative length and width of an object by pointing to various parts of the hand and arm, functionally similar to size-and-shape specifiers in other sign languages (Sandler et al. 2010; Meir et al. 2010). These complex signs refer to objects, and are therefore considered nouns, though the base word may be a verb: cut+long-thin-object is ‘knife’, and drink-tea+round-object is ‘kettle’ (Figure 5.4b).
4.4.4. Co-occurrence with function words

Function words are also selective about their hosts. Therefore, restrictions on their distribution may serve as an indication of the word class of their neighbors. Padden (1988) defines the class of nouns on distributional grounds, as the class of signs that can be modified by quantifiers. Hunger (2006), after establishing a formational difference between nouns and verbs in ÖGS, notices that there are some distributional corollaries: modal verbs tend to occur much more often next to verbs than next to nouns. On the other hand, indices, adjectives, and size and shape classifiers (SASS) are more often adjacent to nouns than to verbs. Another type of function word that can be useful in defining word classes is the class of negation words. Israeli SL has a large variety of negators, including, inter alia, two negative existential signs (glossed as neg-exist-1, neg-exist-2) and two signs that are referred to by signers as ‘zero’ (glossed as zero-1, zero-2). It turns out that these two pairs of signs have different co-occurrence restrictions (Meir 2004): the former co-occur with nouns (signs denoting entities, as in sentence (2) below), the latter with verbs (signs denoting actions, as in sentence (3)). In addition, signs denoting properties are negated by not, the general negator in the language, and cannot co-occur with the other negators (sentence (4)).

(2) ix1 computer neg-exist-1/*zero-1/2/*not
    ‘I don’t have a computer.’ [Israeli SL]

(3) ix3 sleep zero-1/2/*neg-exist-1/2
    ‘He didn’t sleep at all./He hasn’t slept yet.’

(4) chair ixA comfortable not/*zero-1/2/*neg-exist-1/2
    ‘The chair is/was not comfortable.’
Finally, in Israeli SL a special pronominal sign evolved from the homophonous sign person, and is in the process of becoming an object clitic, though it has not been fully grammaticalized yet (Meir 2003, 109⫺140). This sign co-occurs with verbs denoting specific types of actions, but crucially it attaches only to verbs. This conclusion is supported by the fact that all the signs that co-occur with this pronominal sign are also negated by the zero signs described above.
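The facts in (2)-(4) amount to a small co-occurrence table that can be queried mechanically. The following Python sketch illustrates the diagnostic; the three-way table is a simplification of the Israeli SL facts reported by Meir (2004), stated here only for illustration.

```python
# A sketch of the Israeli SL negator diagnostic, based on examples (2)-(4).
# The class-to-negator table is a simplification introduced for illustration.

LICENSED_NEGATORS = {
    "noun":      {"neg-exist-1", "neg-exist-2"},   # signs denoting entities
    "verb":      {"zero-1", "zero-2"},             # signs denoting actions
    "adjective": {"not"},                          # signs denoting properties
}

def compatible_classes(negator):
    """Infer candidate word classes for a sign from the negator it co-occurs with."""
    return {cls for cls, negs in LICENSED_NEGATORS.items() if negator in negs}

print(compatible_classes("zero-1"))       # {'verb'}: sleep in (3) patterns as a verb
print(compatible_classes("neg-exist-1"))  # {'noun'}: computer in (2) patterns as a noun
```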
4.4.5. Co-occurrence with non-manual features

Non-manual features such as facial expressions, head nods, and mouthing play various grammatical roles in different sign languages (Sandler 1999). In this, they are quite
similar to function words, and their distribution may be determined by the word class of the sign they co-occur with. In various sign languages, some facial expressions have been described as performing adverbial functions, modifying actions or properties (for ASL: Baker/Cokely 1980; Liddell 1980; Anderson/Reilly 1998; Wilbur 2000; for Israeli SL: Meir/Sandler 2008; for BSL: Sutton-Spence/Woll 1999). These facial expressions can be used as diagnostics for word classes, since their meaning is clearly compatible with specific concept classes. Israeli SL has facial expressions denoting manner, such as ‘quickly’, ‘meticulously’, ‘with effort’, and ‘effortlessly’, which modify actions and can be used as diagnostics for verbs. In some sign languages (e.g., many European sign languages), signers often accompany manual signs with mouthing of a spoken language word. Mouthing turns out to be selective as well. In the studies of noun-verb pairs in ÖGS and Auslan, it was noticed that mouthing is much more likely to occur with nouns than with verbs. In ÖGS, 92% of the nouns in Hunger’s (2006) study were accompanied by mouthing, whereas only 52% of the verbs were. In Auslan, about 70% of the nouns were accompanied by mouthing, whereas only 13% of the verbs were (Johnston 2002).
4.4.6. Conclusion

At the beginning of this section we questioned whether sign languages are characterized by a different set of word classes because of their modality. We showed that it is possible to arrive at a theoretically based classification that can be applied to both types of languages, using similar types of diagnostics: meaning, syntactic roles, distribution, morphological inflections, and derivational affixes. The main diagnostics discussed in this section are summarized in Table 5.2 below. The main content classes ⫺ nouns, verbs, and adjectives ⫺ are relevant for languages in the signed modality as well. On the other hand, there are at least two types of signs that are clearly spatial in nature. One is the classifier construction (see chapter 8), whose word class status has not yet been determined and which might turn out to require a different classification altogether. The other consists of two sub-classes of verbs, agreement verbs and spatial verbs ⫺ the classes of verbs that ‘move’ in space to encode agreement with arguments or locations. These classes are also sign language specific, though they belong to the larger word class of verbs. Are there any properties related to word classes that characterize sign languages as a type? Firstly, more often than not, the form of a sign is not indicative of its part of speech. For numerous sign languages, it has been observed that many signs can be used both as arguments and as predicates, denoting both an action and a salient participant in the action, and often a property as well. This is, of course, also true of many spoken languages. Secondly, morphological inflection is almost exclusively restricted to predicate positions. Nominal inflections such as case and gender are almost entirely lacking (for number, see chapter 6, Plurality). Thirdly, space plays a role in determining sub-classes within the class of verbs; although not all sign languages have the tripartite verb classification into agreement, spatial, and plain verbs, only sign languages have it. It is important to note that there are also differences between individual sign languages. The sequential affixes determining word classes are clearly language specific, as are the co-occurrence restrictions on function words. Inflectional modulations, which
are pervasive in sign languages, also vary from one language to another. Not all sign languages have verb agreement. Aspectual modulations of verbs and adjectives have been attested in several sign languages. Specific modulations, such as the protractive, predispositional, and susceptative modulations, have been reported for ASL, but whether or not they occur in other sign languages awaits further investigation.

Tab. 5.2: Main diagnostics used for word classification in different sign languages

                                              | Nouns | Verbs | Adjectives
Semantic: Concept class                       | Entity | Event | Property
Syntactic: Syntactic position                 | Argument, Predicate | Predicate | Modifier, Predicate
Syntactic: Syntactic co-occurrences           | Quantifiers; specific negators; determiners | Specific negators; pronominal object clitic | ⫺
Morphological: Formational characterization   | Short and/or reduplicated movement (with respect to comparable verbs) | Longer non-reduplicated movement (with respect to comparable nouns) | ⫺
Morphological: Inflectional modulations       | Plurality | (a) Encoding arguments: verb agreement; reciprocal; multiple; exhaustive. (b) Aspect: habitual; durational; continuative; iterative; protractive; delayed completive; gradual | Predispositional; susceptative; continuative; intensive; approximative; iterative; protractive
Morphological: Word-class-determining affixes | SASS suffixes | ‘sense’-prefixes | Negative suffix (‘not-exist’)
Morphological: Co-occurrence with facial expressions | Mouthing | Adverbial facial expressions | ⫺
5. Word formation

Morphology makes use of three main operations: compounding, affixation, and reduplication. These operations can be instantiated sequentially or simultaneously. The visuo-spatial modality of sign languages favors simultaneity, and offers more possibilities
for such structures and operations, which are highlighted in each of the following sub-sections. Three additional means of expanding the lexicon are not discussed in this chapter. The first is borrowing, which is discussed in chapter 35. The second is conversion or zero-derivation, that is, the assignment of an already existing word to a different word class. As mentioned above, many words in sign languages are multifunctional, serving both as nouns and as verbs or adjectives. It is difficult to determine which use is more basic. Therefore, when a sign functions both as a noun and as a verb, it is difficult to decide whether one is derived from the other (which is the case in conversion), or whether the sign is unspecified as to its word-class assignment, which is characteristic of multifunctionality. Finally, backformation is not discussed here, as I am not aware of any potential case illustrating it in a sign language.
5.1. Compounding

A compound is a word composed of two or more words. Compounding expands the vocabulary of a language by drawing on the existing lexicon, using combinations of two or more words to create novel meanings. Compounding seems to be necessarily sequential, as new lexical units are formed by the sequential co-occurrence of more basic lexical items. Yet sign languages may potentially offer simultaneously structured compounds too. Since the manual modality has two articulators, the two hands, compounds may be created by articulating two different signs simultaneously, one with each hand. We will discuss sequential compounds first, and then turn to several structures that could be regarded as simultaneous compounding.
5.1.1. Sequential compounding

Compounds are words. As such, they display word-like behavior on all levels of linguistic analysis. They tend to have the phonological features of words rather than phrases. For example, in English and many other languages, compounds have one word stress (e.g., a gréenhouse), like words and unlike phrases (a greén hóuse). Semantically, the meaning of a compound is often, though not always, non-compositional. A greenhouse is not a house painted green, but rather “a building made mainly of glass, in which the temperature and humidity can be regulated for the cultivation of delicate or out-of-season plants” (Webster’s New World Dictionary, Third College Edition). It is usually transparent and not green. Syntactically, a compound behaves like one unit: the members of a compound cannot be interrupted by another unit, and they cannot be independently modified. A dark greenhouse is not a house painted dark green. These properties may also serve as diagnostics for identifying compounds and distinguishing them from phrases.

Properties of sign language compounds: Sign languages have compounds too. In fact, compounding is the only sequential morphological device that is widespread in sign languages. Some illustrative examples from different languages are given in Table 5.3. As in spoken languages, sign language compounds display word-like characteristics. In their
seminal study of compounds in ASL, Klima and Bellugi (1979, 207⫺210) describe several properties that are characteristic of compounds and distinguish them from phrases. Firstly, a quick glance at the examples in Table 5.3 shows that the meaning of compounds is in many cases not transparent. The ASL compound blue^spot does not mean ‘a blue spot’ but rather ‘bruise’; heart^offer (in Israeli SL) does not mean ‘to offer one’s heart’ but rather ‘to volunteer’; and nose^fault (‘ugly’ in Auslan) has nothing to do with the nose. Since the original meanings of the compound members may be lost in the compound, the following sentences are not contradictory (Klima/Bellugi 1979, 210):

(5) blue^spot green, vague yellow
    ‘That bruise is green and yellowish.’ [ASL]

(6) bed^soft hard
    ‘My pillow is hard.’
Compounds are lexicalized in form as well. They tend to have the phonological appearance of a single sign rather than of two signs. For example, they are much shorter than the equivalent phrases (Klima/Bellugi 1979, 213), because of the reduction and deletion of phonological segments, usually the movement of the first member. The transitory movement between the two signs is more fluid.
Tab. 5.3: Examples of compounds in sign languages

ASL (Klima/Bellugi 1979): bed^soft ‘pillow’; face^strong ‘resemble’; blue^spot ‘bruise’; sleep^sunrise ‘oversleep’
BSL (Brennan 1990): think^keep ‘remember’; see^never ‘strange’; work^support ‘service’; face^bad ‘ugly’
Israeli SL (Meir/Sandler 2008): fever^tea ‘sick’; heart^offer ‘volunteer’; respect^mutuality ‘tolerance’
Auslan (Johnston/Schembri 1999): can’t^be-different ‘impossible’; red^ball ‘tomato’; nose^fault ‘ugly’
ABSL (Aronoff et al. 2008): car^light ‘ambulance’; pray^house ‘mosque’; sweat^sun ‘summer’
IPSL (Zeshan 2000): father^mother ‘parents’; understand^much ‘intelligent’; potato^various ‘vegetable’
New Zealand Sign Language (NZSL) (Kennedy 2002): no^germs ‘antiseptic’; make^dead ‘fatal’; ready^eat ‘ripe’
Fig. 5.5: The ASL signs (a) think and (b) marry, and the compound they form, (c) believe. Reprinted with permission from Sandler and Lillo-Martin (2006).
In some cases, the movement of the second component is also deleted, and the transitory movement becomes the sole movement of the compound, resulting in a monosyllabic sign with only one movement, like canonical simplex signs (Sandler 1999). The changes contributing to the ‘single sign’ appearance of compounds affect not only the movement component, but also hand configuration and location. If the second sign is performed on the non-dominant hand, that hand takes its position at the start of the whole compound. In many cases, the handshape and orientation of the second member spread to the first member as well (Liddell/Johnson 1986; Sandler 1989, 1993). Similar phenomena have been attested in Auslan (Johnston/Schembri 1999, 174). Johnston and Schembri point out that in lexicalized compounds phonological segments of the components are often deleted, and that such compounds might therefore be better characterized as blends. As a result of the various phonological changes that can take place, a compound may end up looking very much like a simplex sign: it has one movement and one hand configuration. In the ASL compound believe (see Figure 5.5), for example, the first location (L1) and the movement (M) segments of the first member, think, are deleted. The second location (L2) becomes the first location of the compound, and the movement and final location segments are those of the second member of the compound,
The only indication that believe is a compound is the fact that it involves two major locations, the head and the non-dominant hand, a combination not found in simplex signs (Battison 1978). These phonological changes are represented in (7), based on Sandler (1989):
(7) The phonological representation of the ASL compound believe
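The segment deletions summarized in (7) can also be stated procedurally. The following sketch is purely illustrative: the L-M-L (Location-Movement-Location) dictionaries are a strong simplification of Sandler’s (1989) representation, and the concrete feature values for think and marry are invented placeholders, not phonological transcriptions.

```python
# Illustrative sketch of the segmental reductions described in (7).
# Signs are simplified to L-M-L skeletons plus a handshape.

def reduce_compound(first, second, spread_handshape=False):
    """Reduce two L-M-L signs to a single L-M-L compound."""
    compound = {
        "L1": first["L2"],    # L1 and M of the first member are deleted;
                              # its final location opens the compound
        "M": second["M"],     # movement comes from the second member
        "L2": second["L2"],   # final location comes from the second member
        "handshapes": [first["handshape"], second["handshape"]],
    }
    if spread_handshape:      # attested in many compounds, though not in all
        compound["handshapes"] = [second["handshape"]]
    return compound

# Invented feature values for illustration only:
think = {"L1": "head", "M": "straight", "L2": "head", "handshape": "1"}
marry = {"L1": "neutral", "M": "arc", "L2": "nondominant-hand", "handshape": "clasp"}

believe = reduce_compound(think, marry)
# Result: one movement, but two major locations (head and non-dominant hand) -
# the residue that reveals the compound origin of BELIEVE (Battison 1978).
```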
Morphological structure: Compounding takes advantage of linear structure, but it also involves reorganization and restructuring. The members of a compound may exhibit different types of relationship. Endocentric compounds are those that have a head. The head represents the core meaning of the compound and determines its lexical category. The English compound highchair is endocentric, headed by the noun chair. Semantically, a highchair is a type of chair, and morphologically it is a noun, the lexical category of its head. A compound such as scarecrow is exocentric: it is neither a ‘crow’ nor a ‘scare’. Endocentric compounds are further classified according to the position of the head in the compound: right-headed (the head occurs in final position, as in highchair) and left-headed (the head occurs in initial position, as in Hebrew gan-yeladim ‘kindergarten’, literally ‘garden-children’). It is commonly assumed that the position of the head in compounds is systematic in a language (Fabb 1998). English, for example, is characterized as right-headed, while Hebrew is left-headed.

Not much has been written on headedness in sign language compounds. Of the ASL examples presented in Klima and Bellugi, many are exocentric, e.g., sure^work ‘seriously’, will^sorry ‘regret’, wrong^happen ‘accidentally’, face^strong ‘resemble’, wrong^happen ‘fate’. Most of the endocentric compounds described there are left-headed, e.g., eat(food)^noon ‘lunch’, think^alike ‘agree’, flower^grow ‘plant’, sleep^sunrise ‘oversleep’, but at least one, blue^spot ‘bruise’, is right-headed. In Israeli SL, compounds that have Hebrew counterparts are usually left-headed (party^surprise ‘surprise party’), though for some signers they may be right-headed. Compounds that do not have Hebrew counterparts are often exocentric, e.g., fever^tea ‘sick’, swing^play ‘playground’. Verbal compounds are often right-headed, as in heart^suggest ‘volunteer’ and bread^feed ‘provide for’.

A third type of compound structure is the coordinate compound, where the members are of equal rank, as in hunter-gatherer, someone who is both a hunter and a gatherer. In a special type of coordinate compound, the members are basic category-level terms of a superordinate term, and the meaning of the compound is that superordinate term. This class of compounds, also called dvandva compounds (etymologically derived from Sanskrit dvamdva, literally ‘a pair, couple’, a reduplication of dva ‘two’), is not productive in most modern European languages, but occurs in languages of other families. Such compounds exist in ASL (Klima/Bellugi 1979, 234⫺235): car^plane^train ‘vehicle’, clarinet^piano^guitar ‘musical instrument’, ring^bracelet^necklace ‘jewelry’, kill^stab^rape ‘crime’, mother^father^brother^sister ‘family’. Like other compounds, they denote one concept, the movement of each component sign is reduced, and transitions between signs are minimal. However, there is a lot of individual variation in form and in the degree of productivity of these forms. Younger signers use them very little, and consider them to be old-fashioned or even socially stigmatized.
5.1.2. Simultaneous compounding

In principle, simultaneous compounding in sign languages can be of two types. In the first, each hand produces a different sign, but the production is simultaneous. The second type combines certain phonological parameters from two different sources to create a single sign. In the latter type, not all the phonological specifications of each compound member materialize, and therefore such forms may also be characterized as blends.

Examples of the first type are exceedingly rare. Two BSL examples are mentioned in the literature: minicom (a machine which allows typed messages to be transmitted along a telephone line; Brennan 1990, 151) and space-shuttle (Sutton-Spence/Woll 1999, 103). The compound minicom is composed of the sign type and the sign telephone produced simultaneously: the right hand assumes the handshape of the sign telephone, but is positioned over the left hand, which produces the sign type.

However, according to some analyses, simultaneous compounding is very widespread in sign languages. Brennan (1990) uses the term ‘classifier compounds’ for signs in which the non-dominant hand, and sometimes both hands, assumes the handshape of a classifier morpheme. For example, in the sign aquadiver the non-dominant hand, with a flat handshape, represents a surface, and the dominant hand, in an upright 2-handshape moving downwards, represents a person moving downwards from the surface. According to Brennan’s analysis, any sign containing a classifier handshape on the non-dominant hand is a compound, even some so-called ‘frozen’ lexical items. A sign such as write (in Israeli SL and many other sign languages), whose dominant hand has a handshape depicting the handling of a long thin object and moving it over a flat surface (represented by the non-dominant hand), is also a classifier compound under this account. Johnston and Schembri (1999, 171) refer to such constructions as “simultaneous sign constructions” rather than compounds, pointing out that such constructions may be phrasal or clausal. It should be noted that however these signs originated, they are lexical signs in every respect, and under most analyses they are not regarded synchronically as compounds.

Two types of word formation process combine the handshape from one source with the movement and location from another: numeral incorporation, where the handshape represents a number (Stokoe et al. 1965; Liddell 1996 and works cited there), and initialization, in which the handshape is drawn from the handshape inventory of the manual alphabet (Stokoe et al. 1965; Brentari/Padden 2001). These processes are not usually analyzed as compounds, but rather as some kind of incorporation, affixation, or combination of two bound roots (e.g., Liddell 1996 on numeral incorporation). Whatever the analysis, they both combine elements from two sources, and in this they resemble compounding, but they do so simultaneously, a possibility available only to languages in the signed modality.

Numeral incorporation is usually found in pronominal signs and in signs denoting time periods, age, and money. In these signs, the number of fingers denotes quantity. For example, the basic form of the signs hour, day, week, month, and year in Israeli SL is made with a 1-handshape. By using a 2-, 3-, 4-, or 5-handshape instead, the number of units is expressed. That is, signing day with a 2-handshape means ‘two days’, with a 3-handshape ‘three days’, etc.
This incorporation of number is limited in Israeli SL to five in signs with one active hand, and to ten in symmetrical two-handed signs. Number signs in many sign languages have specifications only for handshape, and are therefore good candidates for participating in such simultaneous compounding (but see Liddell 1996 for a different analysis). But there are also restrictions on the base sign, which provides the movement and location specifications: usually it has to have a 1-handshape, which can be taken to represent the number one. However, some counter-examples to this generalization do exist. In DGS, the sign year is specified for a different handshape, which is nonetheless replaced by the numeral handshapes to express ‘one/two/three, etc. years’. Numeral incorporation has been reported for many sign languages, e.g., ASL, BSL, Israeli SL, DGS, Auslan, and IPSL, among others. But there are sign languages that do not use this device. In ABSL, numeral incorporation has not been attested, maybe because time concept signs in the language do not have a 1-handshape (for numeral incorporation see also chapters 6 and 11).

Initialization is another type of simultaneous combination of phonological specifications from two different sources: a spoken language word and a sign language word. The handshape of an initialized sign represents a letter of the fingerspelled alphabet, corresponding to the first letter of the written form of an ambient spoken language word. This initialized handshape is usually added to a sign that already exists in the language, lending it an additional ⫺ often more specific ⫺ meaning for which there is no other sign. For example, the ASL signs family, association, team, and department all share the movement and location of the sign group, and are distinguished by the handshapes F, A, T, and D. As Brentari and Padden (2001, 104) point out, some initialized signs in ASL are not built on native signs, but they still form a semantic and formational ‘family’. Color terms such as blue, purple, yellow, and green are characterized by the same movement and location, although there is no general color sign on which they are based. The same holds for color terms and kinship terms in LSQ (Machabee 1995, 47). In other cases, the movement and location may iconically represent some feature of the concept. In LSQ, the sign for ‘Roman’ is performed with an R handshape tracing the form of a Roman military helmet above the head (Machabee 1995, 45).

Initialization is found in other sign languages as well, e.g., Irish Sign Language (Ó’Baoill/Matthews 2002) and Israeli SL (Meir/Sandler 2008, 52). However, it is much less common in languages with a two-handed fingerspelling system, such as BSL, Auslan, and New Zealand Sign Language. In a one-handed fingerspelling system, each letter is represented solely by the handshape, which may then easily be incorporated into other signs, taking their location and movement features. In a two-handed system, each letter is identified by a combination of both hands and a location (and sometimes a movement as well), so that it is much less free to combine with other phonological parameters (Cormier/Schembri/Tyrone 2008). More common in these languages are single manual letter signs, which are based on a letter of an English word but involve only a very limited set of movements of the dominant hand against the non-dominant hand.
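The logic of numeral incorporation can be made concrete in a short sketch. It is a hypothetical illustration loosely modeled on the Israeli SL facts just described (a 1-handshape base, limits of five and ten units); the feature values and function names are invented, and no existing formal grammar of any sign language is implied.

```python
# Hypothetical sketch: numeral incorporation as a simultaneous combination of
# parameters - the handshape comes from the number sign, while movement and
# location are retained from the base sign.

def incorporate_numeral(base, number, two_handed=False):
    limit = 10 if two_handed else 5        # Israeli SL: 5 one-handed, 10 two-handed
    if base["handshape"] != "1":
        # DGS YEAR would be a counter-example to this restriction on the base.
        raise ValueError("base sign must have a 1-handshape")
    if not 1 <= number <= limit:
        raise ValueError(f"numeral {number} cannot be incorporated")
    return {
        "handshape": str(number),          # the handshape realizes the numeral
        "movement": base["movement"],      # movement and location are kept
        "location": base["location"],
        "meaning": f"{number} {base['meaning']}(s)",
    }

# Invented feature values for illustration only:
day = {"handshape": "1", "movement": "downward", "location": "cheek", "meaning": "day"}
print(incorporate_numeral(day, 2))         # 'two days': DAY signed with a 2-handshape
```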
5.2. Affixation

Though compounding is common in all studied sign languages, sequential affixation is very rare. This is partly due to the general preference in manual-visual languages for
more simultaneous structures. However, since compounds are not uncommon, simultaneity cannot be the sole factor disfavoring sequential affixation. Another explanation, suggested by Aronoff, Meir and Sandler (2005), is the relatively young age of sign languages. Sequential derivational affixes in spoken languages are in many cases the result of the grammaticalization of free words. Grammaticalization is a complex set of diachronic changes (among them reanalysis, extension, phonological erosion, and semantic bleaching) that take time to crystallize. Sign languages as a class are too young for such structures to be abundant (but see chapter 36). In addition, it might be the case that there are more affixal structures in sign languages that have not been identified yet, because of the young age of the field of sign linguistics.

How can one identify affixes in a language? What distinguishes them from compound members? First, an affix recurs in the language, co-occurring with many different base words, while compound members are confined to a few bases. The suffix -ness, for example, is listed as occurring in 3,058 English words (Aronoff/Anshen 1998, 245), while green (as in greenhouse, greengrocer, greenmail) occurs in about 30. In addition, affixes are more distant from their free word origin: while members of compounds usually also occur as free words in the language, affixes in many cases do not. Therefore, a morpheme that recurs in many lexical items in a language and, in addition, does not appear as a free form is an affix and not a compound member. Finally, allomorphy is much more typical of affixes than of compound members. This is to be expected, since affixes are more fused with their bases than compound members are with each other. However, the difference between an affix and a compound member is a matter of degree, not a categorical distinction, and can be hard to determine in particular cases.
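These three diagnostics can be summarized as a rough scoring procedure. The sketch below is only a heuristic restatement of the criteria just listed; the thresholds and weights are invented for illustration, and, as noted above, the affix/compound distinction is gradient rather than categorical.

```python
# Toy heuristic for the three affixhood diagnostics discussed above:
# type frequency, occurrence as a free form, and allomorphy.

def affix_likelihood(n_bases, occurs_free, has_allomorphs):
    score = 0
    score += 2 if n_bases > 100 else 0     # recurs with many different bases
    score += 2 if not occurs_free else 0   # no longer used as a free word
    score += 1 if has_allomorphs else 0    # allomorphy suggests fusion with the base
    return score  # higher = more affix-like; there is no categorical cut-off

print(affix_likelihood(3058, occurs_free=False, has_allomorphs=True))  # -ness: 5
print(affix_likelihood(30, occurs_free=True, has_allomorphs=False))    # green-: 0
```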
5.2.1. Sequential affixation in sign languages

Very few sequential affixes have been mentioned in the sign language literature. As they are so rare, those affixes that were found were assumed to have evolved under the influence of the ambient spoken language. In ASL, the comparative and superlative affixes (Sandler/Lillo-Martin 2006, 64) and the agentive suffix were regarded as English loan translations. However, recently Supalla (1998) argued, on the basis of old ASL films, that the agentive suffix evolved from an old form of the sign ‘person’ in ASL.

For three affixes it has been explicitly argued that the forms are indeed affixes and not free words or members of a compound: two negative suffixes, one in ASL and the other in Israeli SL, and a set of ‘sense’ prefixes in Israeli SL. All of these affixes have free form counterparts that are nonetheless significantly different from the bound forms, so as to justify an affixational analysis. The affinity between the bound and the free forms may indicate how these affixes evolved.

The suffix glossed as zero in ASL has the meaning ‘not at all’, and apparently evolved from a free sign with a similar meaning (Sandler 1996; Aronoff/Meir/Sandler 2005). However, the suffix and the base it attaches to behave like a single lexical unit: they cannot be interrupted by another element, and for some signers they are fused phonologically. As is often the case, some combinations of word+zero have an idiosyncratic meaning, e.g., touch+zero ‘didn’t use it at all’, and there are some arbitrary gaps in the lexical items it attaches to. What makes it more affix-like than compound-like is its productivity: it attaches quite productively to verbs and (for some signers) to
adjectives. Yet its distribution and productivity vary greatly across signers, indicating that it has not been fully grammaticized.

The Israeli SL negative suffix, mentioned in section 4.4.3, was apparently grammaticized from a negative word meaning ‘none’ or ‘not exist’. In addition to other characteristics typical of affixes, it also has two allomorphs: a one-handed and a two-handed variant, the distribution of which is determined by the number of hands of the base.

Another class of affixes is the ‘sense’ prefixes described above. Similar forms have been reported in other sign languages, e.g., BSL (Brennan 1990), where they are treated as compounds. Indeed, such forms show that sometimes the distinction between compounds and affixed words is blurred. The reason that Aronoff, Meir and Sandler (2005) analyze these forms as affixes is their productivity. There are more than 70 such forms in Israeli SL, and signers often use them to create new concepts. In addition, signers have no clear intuition about the lexical class of the prefixes; they are not sure whether the pointing-to-the-eye sign should be translated as ‘see’ or ‘eye’, or the pointing-to-the-nose sign as ‘smell’ or ‘nose’, etc. Such indeterminacy is characteristic of affixes, but not of words. The fact that these forms are regarded as compounds in other languages may be due to a lesser degree of productivity in those languages (for example, they are less prevalent in ASL), or to the fact that other researchers did not consider an affix analysis. However, their recurrence in many sign languages indicates that these signs are productive sources for word formation.

Two potential suffixes exist in ABSL. They were mentioned in section 4.4.3: the locative pointing signs, and the size and shape signs. At present, it is hard to determine whether these are affixed words or compounds, since not much is known about the structure of lexical items in ABSL. However, since these signs recur in a number of complex signs, they have the potential of becoming suffixes in the language.
5.2.2. Simultaneous affixation in sign language

The term ‘simultaneous affixation’ may seem contradictory, since affixation is usually conceived of as linear. However, by now it should be clear that morphological information may be added not by adding segments, but rather by changing features of segments. Therefore, all the processes described above in which morphological categories are encoded by a change in the movement parameters of the base sign may be regarded as instances of simultaneous affixation. All inflectional processes identified in the sign language literature to date make use of this formal device, and a few were described in section 4.4.2 above. But sign languages use this device for derivational purposes as well, as exemplified by the noun-verb pairs in section 4.4.1. Quite a few of the derivational processes involve reduplication, to which we turn in the next section. Here we mention derivational processes that involve changes to the movement component without reduplication.

ASL has means for deriving predicates from nouns. Klima and Bellugi (1979, 296) describe a systematic change to the movement of ASL nouns, forming predicates with the meaning ‘to act/appear like X’, as in ‘to act like a baby’ from baby, ‘to seem Chinese’ from chinese, and ‘pious’ from church. The derived predicates have a fast and tense movement with a restrained onset.
Klima and Bellugi also point out that the figurative or metaphorical use of signs often involves a slight change in the movement of the base sign. A form meaning ‘horny’ differs slightly in movement from hungry; ‘to have a hunch’ differs from feel. Similarly, differences in movement encode an extended use of signs as sentential adverbials, as in ‘suddenly’ or ‘unexpectedly’ from wrong, or ‘unfortunately’ from trouble. Yet in these cases both the form and the meaning relations are idiosyncratic, and appear only in particular pairs of words. These pairs show that movement is a very productive tool for indicating relationships among lexical items. But not all instances of movement difference are systematic enough to be analyzed as derivational.

Not only may the quality of the movement change, but also its direction. In signs denoting time concepts in a few sign languages, the direction of movement indicates moving forwards or backwards in time. The signs tomorrow and yesterday in Israeli SL form a minimal pair: they have the same hand configuration and location, but differ in the direction of movement. In yesterday the movement is backwards, and in tomorrow it is forwards. Similarly, if a forward or backward movement is imposed on the signs week and year, the derived meanings will be ‘next week/year’ and ‘last week/year’. This process is of very limited productivity. It is restricted to words denoting time concepts, and may be further restricted by the phonological form of the base sign. Furthermore, the status of the direction of movement in these signs is not clear: it is not a morpheme, yet it is a phoneme that is meaning-bearing (see the discussion of sign families in section 2.2). Nonetheless, within its restricted semantic field, it is quite noticeable.
5.3. Reduplication

Reduplication is a process by which some phonological segment, or segments, of the base is repeated. Yet what is repeated may vary. It could be the entire base, as in Warlpiri kurdu-kurdu (‘children’, from kurdu ‘child’; Nash 1986); a morpheme; a syllable; or any combination of segments, such as the first CVC segment of the base, as in Agta tak-takki (‘legs’, from takki ‘leg’; Marantz 1982).

Function-wise, reduplication lends itself very easily to iconic interpretation. Repeating an element creates a string with several identical elements. When a whole base is repeated, the interpretation seems quite obvious. Lakoff and Johnson (1980, 180) refer to this as the principle of “more of form stands for more of content”. The most straightforward iconic uses of reduplication are plurality and distribution for nouns (see chapter 6, Plurality); repetition, duration, and habitual activity in verbs (see chapter 9, Tense, Aspect, and Modality); and increase in the size and/or intensity of adjectives. However, reduplication is also used in various non-iconic or less motivated functions, such as forming infinitives, verbal adjectives, causatives, various aspects, and modalities (Kouwenberg 2003).
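The two spoken-language patterns just cited can be stated as simple string operations. The sketch below is purely illustrative: the naive CVC extraction happens to work for the Agta example but is not a general analysis of either language.

```python
# Minimal sketches of the two reduplication types cited above: full
# reduplication (Warlpiri kurdu-kurdu) and partial reduplication copying
# the first CVC of the base (Agta tak-takki).

VOWELS = "aeiou"

def full_reduplication(base):
    return f"{base}-{base}"

def cvc_reduplication(base):
    # Copy everything up to and including the first post-vocalic consonant.
    for i, ch in enumerate(base):
        if ch in VOWELS:
            return base[: i + 2] + "-" + base
    return base  # no vowel found; return the base unchanged

print(full_reduplication("kurdu"))  # kurdu-kurdu 'children'
print(cvc_reduplication("takki"))   # tak-takki 'legs'
```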
The sign modality affords several possibilities of reduplication, some of which do not have counterparts in spoken languages (see Pfau/Steinbach 2006). Reduplication may involve several iterations of a sign. These iterations may be produced in the same place, or may be displaced in the signing space. Iterations may be performed by one hand or by both hands; if the latter, the two hands may move symmetrically or in an alternating fashion. A circular movement may be added to the iterations, in various rhythmic patterns. Consequently, some phonological features of the base sign may be altered. Non-manual features may be iterated as well, or a feature may spread over the entire set of manual iterations. Finally, reduplication may also take a simultaneous form: one sign can be articulated simultaneously by both hands.

Sign languages certainly make extensive use of reduplication, and as the forms vary, so do the functions. Reduplication is very common in verbal and adjectival aspectual inflections: of the 11 adjectival modulations in Klima and Bellugi (1979), seven involve reduplication, and 10 of the 12 aspectual modulations exemplified by look-at and give also involve reduplication. It is also very commonly used to indicate plurality on nouns (see Sutton-Spence/Woll 1999, 106 for BSL; Pizzuto/Corazza 1996 for Italian Sign Language (LIS); Pfau/Steinbach 2006 for DGS as well as LIS and BSL). These inflectional processes are discussed in the relevant chapters of this volume.

Reduplication is also used in a few derivational processes. Frishberg and Gough (1973, cited in Wilbur 1979, 81) point out that repetitions of signs denoting time units in ASL, e.g., week, month, tomorrow, derive adverbs meaning weekly, monthly, every-day. Slow repetition with a wide circular path indicates duration, ‘for weeks and weeks’. Activity nouns in ASL are derived from verbs by imposing small, quick, and stiff repeated movements on non-stative verbs (Klima/Bellugi 1979, 297; Padden/Perlmutter 1987, 343). The verb act has three unidirectional movements, while the noun acting is produced with several small, quick, and stiff movements. In the noun-verb pairs (discussed above) in ASL and Auslan, reduplicated movement (in addition to the quality of the movement) distinguishes between nouns and verbs.

Other derivational processes do not change the category of the base word, but create a new (although related) lexical item. It should be noted that in such cases it is often difficult to determine whether the process is inflectional or derivational; for example, the two adjectival processes described here are referred to as inflection in Klima and Bellugi (1979) and as derivation in Padden and Perlmutter (1987). Characteristic adjectives are derived from ASL signs denoting incidental or temporary states, such as quiet, mischievous, rough, and silly, by imposing circular reduplicated movement on the base sign. Also in ASL, repeated tense movements derive adjectives with the meaning of ‘-ish’: youngish, oldish, blueish (Bellugi 1980). In Israeli SL, verbs denoting a reciprocal action are derived by imposing alternating movement on some verbs, e.g., say ⫺ conduct conversation; speak ⫺ converse; answer ⫺ ‘conduct a dialogue of questions and answers’ (Meir/Sandler 2008).

Simultaneous reduplication, that is, the articulation of a sign by both hands instead of by only one hand, is very rare as a word formation device. Johnston and Schembri (1999, 161⫺163) point out that in Auslan, producing a two-handed version of a one-handed sign (which they term ‘doubling’) very rarely results in a different yet related lexical item. Usually the added meaning is that of intensification, e.g., bad vs. very-bad/appalling/horror, or success vs. successful/victorious, but such intensified forms are often also characterized by a specific facial expression and manual stress. Most instances of doubling in Auslan are either free variants of the single-handed version, or mark grammatical distinctions such as distributive aspect on verbs.
Therefore they conclude that in most cases doubled forms do not constitute separate lexical items in the language.
5.4. Conclusion

Sign languages make use of word formation operations that are also found in spoken languages, but endow them with flavors that are available only to manual-spatial languages: the existence of two major articulators, and their ability to move in various spatial and temporal patterns. There is a strong preference for simultaneous operations, especially in affixation. Inflection is, in fact, exclusively encoded by simultaneous affixation, while derivation is more varied in the means it exploits. Both inflection and derivation make use of modulations of the movement component of the base sign. In other words, sign languages make extensive use of one phonological parameter for grammatical purposes. Although signs in sign families (described in section 1.2) can share any formational element, systematic relations between forms are encoded by movement.

Why is it that movement is singled out for performing these grammatical tasks, and not the other parameters of the sign ⫺ the hand configuration or the location? Using a gating task, Emmorey and Corina (1990) investigated how native ASL signers use phonetic information for sign recognition. The results indicated that the location of the sign was identified first, followed quickly by the handshape, while the movement was identified last. These data may suggest that the movement is in a sense ‘extra’: it adds little to the lexical identity of the sign, but it can be used to add shades of meaning. Moreover, movement is inherently both spatial and temporal. Many inflectional categories encode temporal and spatial concepts, and therefore movement is the most obvious formational parameter to express these notions in a transparent way. Yet the use of movement in derivational processes shows that iconicity is not the entire story. It might be the case that once a formational element is introduced into the language for whatever reason, it may then expand and be exploited as a grammatical device for various functions.

The use of movement also has an interesting parallel in spoken languages, in that non-sequential morphology often makes use of the vowels of the base word, and not the consonants. Furthermore, it has been argued that vowels carry out more grammatical roles in spoken languages (both syntactic and morphological), while consonants carry more of the lexical load (Nespor/Peña/Mehler 2003). Both movement and vowels are the sonorous formational elements; both are durational and less discrete. However, what makes them key elements in carrying the grammatical load (as opposed to the lexical load) of the lexeme remains an open issue.

The ubiquity of compounds shows that sequential operations are not utterly disfavored in sign languages. Signed compounds share many properties with their spoken language counterparts, including the tendency to lexicalize and become more similar in form to simplex signs. Compounding may also give rise to the development of grammatical devices such as affixes. Elements that recur in compounds are good candidates for becoming affixes, but such developments take time, and are therefore quite sparse in young languages, including sign languages (Aronoff/Meir/Sandler 2005). Because of their youth, sign languages actually offer us a glimpse into such diachronic processes in real time.
6. Literature

Anderson, D. E./Reilly, Judy 1998 PAH! The Acquisition of Adverbials in ASL. In: Sign Language & Linguistics 1(2), 3⫺28.
Aronoff, Mark 1976 Word Formation in Generative Grammar. Cambridge, MA: MIT Press.
Aronoff, Mark/Anshen, Frank 1998 Morphology and the Lexicon: Lexicalization and Productivity. In: Spencer, Andrew/Zwicky, Arnold M. (eds.), The Handbook of Morphology. Oxford: Blackwell, 237⫺247.
Aronoff, Mark/Meir, Irit/Sandler, Wendy 2005 The Paradox of Sign Language Morphology. In: Language 81, 301⫺344.
Aronoff, Mark/Meir, Irit/Padden, Carol A./Sandler, Wendy 2008 The Roots of Linguistic Organization in a New Language. In: Interaction Studies: A Special Issue on Holophrasis vs. Compositionality in the Emergence of Protolanguage 9(1), 131⫺150.
Baker, Charlotte/Cokely, Dennis 1980 American Sign Language: A Teacher’s Resource Text on Grammar and Culture. Silver Spring, MD: TJ Publishers.
Bellugi, Ursula 1980 How Signs Express Complex Meanings. In: Baker, Charlotte/Battison, Robbin (eds.), Sign Language and the Deaf Community. Silver Spring, MD: National Association of the Deaf, 53⫺74.
Boyes Braem, Penny 1986 Two Aspects of Psycholinguistic Research: Iconicity and Temporal Structure. In: Tervoort, B. T. (ed.), Signs of Life: Proceedings of the Second European Congress on Sign Language Research. Amsterdam: Publication of the Institute for General Linguistics, University of Amsterdam 50, 65⫺74.
Brennan, Mary 1990 Word Formation in British Sign Language. Stockholm: The University of Stockholm.
Brentari, Diane/Padden, Carol A. 2001 Native and Foreign Vocabulary in American Sign Language: A Lexicon with Multiple Origins. In: Brentari, Diane (ed.), Foreign Vocabulary in Sign Languages: A Cross-linguistic Investigation of Word Formation. Mahwah, NJ: Lawrence Erlbaum, 87⫺119.
Cohen, Einya/Namir, Lila/Schlesinger, I. M. 1977 A New Dictionary of Sign Language: Employing the Eshkol-Wachmann Movement Notation System. The Hague: Mouton.
Cormier, Kearsy/Schembri, Adam/Tyrone, Martha E. 2008 One Hand or Two? Nativisation of Fingerspelling in ASL and BANZSL. In: Sign Language & Linguistics 11(1), 3⫺44.
Emmorey, Karen 2002 Language, Cognition, and the Brain: Insights from Sign Language Research. Mahwah, NJ: Lawrence Erlbaum.
Engberg-Pedersen, Elisabeth 1993 Space in Danish Sign Language. Hamburg: Signum.
Fabb, Nigel 1998 Compounding. In: Spencer, Andrew/Zwicky, Arnold M. (eds.), The Handbook of Morphology. Oxford: Blackwell, 66⫺83.
Fernald, Theodore B./Napoli, Donna Jo 2000 Exploitation of Morphological Possibilities in Signed Languages. In: Sign Language & Linguistics 3(1), 3⫺58.
Fischer, Renate 2002 The Study of Natural Sign Language in Eighteenth-century France. In: Sign Language Studies 2, 391⫺406.
Frishberg, Nancy/Gough, Bonnie 1973 Morphology in American Sign Language. In: Salk Institute Working Paper.
Fuentes, Mariana/Tolchinsky, Liliana 2004 The Subsystem of Numerals in Catalan Sign Language. In: Sign Language Studies 5(1), 94⫺117.
Haspelmath, Martin 2001 Word Classes and Parts of Speech. In: Baltes, Paul B./Smelser, Neil J. (eds.), International Encyclopedia of the Social & Behavioral Sciences Vol. 24. Amsterdam: Pergamon, 16538⫺16545.
Hockett, Charles F. 1960 The Origins of Speech. In: Scientific American 203, 89⫺96.
Hunger, Barbara 2006 Noun/Verb Pairs in Austrian Sign Language (ÖGS). In: Sign Language & Linguistics 9(1/2), 71⫺94.
Johnston, Trevor/Schembri, Adam 1999 On Defining Lexeme in a Signed Language. In: Sign Language & Linguistics 2(2), 115⫺185.
Johnston, Trevor 2001 Nouns and Verbs in Australian Sign Language: An Open and Shut Case? In: Journal of Deaf Studies and Deaf Education 6(4), 235⫺257.
Kennedy, Graeme (ed.) 2002 A Concise Dictionary of New Zealand Sign Language. Wellington: Bridget Williams Books.
Kimmelman, Vadim 2009 Parts of Speech in Russian Sign Language: The Role of Iconicity and Economy. In: Sign Language & Linguistics 12(2), 161⫺186.
Klima, Edward S./Bellugi, Ursula 1979 The Signs of Language. Cambridge, MA: Harvard University Press.
Kouwenberg, Silvia 2003 Twice as Meaningful: Reduplication in Pidgins, Creoles and other Contact Languages. London: Battlebridge Publications.
Kyle, Jim G./Woll, Bencie 1985 Sign Language: The Study of Deaf People and their Language. Cambridge: Cambridge University Press.
Lakoff, George/Johnson, Mark 1980 Metaphors we Live by. Chicago: University of Chicago Press.
Liddell, Scott K. 1980 American Sign Language Syntax. The Hague: Mouton.
Liddell, Scott K. 1996 Numeral Incorporating Roots & Non-incorporating Roots in American Sign Language. In: Sign Language Studies 92, 201⫺225.
Machabee, Dominique 1995 Signs in Quebec Sign Language. In: Sociolinguistics in Deaf Communities 1, 29⫺61.
Marantz, Alec 1982 Re Reduplication. In: Linguistic Inquiry 13(3), 435⫺482.
McWhorter, John 1998 Identifying the Creole Prototype: Vindicating a Typological Class. In: Language 74, 788⫺818.
Meier, Richard P. 1982 Icons, Analogues, and Morphemes: The Acquisition of Verb Agreement in ASL. PhD Dissertation, University of California, San Diego.
Meir, Irit 2003 Grammaticalization and Modality: The Emergence of a Case-marked Pronoun in Israeli Sign Language. In: Journal of Linguistics 39(1), 109⫺140.
Meir, Irit 2004 Question and Negation in Israeli Sign Language. In: Sign Language & Linguistics 7(2), 97⫺124.
Meir, Irit/Sandler, Wendy 2008 A Language in Space: The Story of Israeli Sign Language. New York: Lawrence Erlbaum.
Meir, Irit/Aronoff, Mark/Sandler, Wendy/Padden, Carol 2010 Sign Languages and Compounding. In: Scalise, Sergio/Vogel, Irene (eds.), Compounding. Amsterdam: Benjamins, 301⫺322.
Nash, David G. 1986 Topics in Warlpiri Grammar. New York: Garland.
Nespor, Marina/Sandler, Wendy 1999 Prosody in Israeli Sign Language. In: Language and Speech 42, 143⫺176.
Ó’Baoill, Donall/Matthews, Pat A. 2002 The Irish Deaf Community. Vol. 2: The Structure of Irish Sign Language. Dublin: The Linguistics Institute of Ireland.
Padden, Carol 1988 Interaction of Morphology and Syntax in American Sign Language. New York: Garland.
Padden, Carol/Perlmutter, David 1987 American Sign Language and the Architecture of Phonological Theory. In: Natural Language and Linguistic Theory 5, 335⫺375.
Pfau, Roland/Steinbach, Markus 2006 Pluralization in Sign and in Speech: A Cross-Modal Typological Study. In: Linguistic Typology 10, 135⫺182.
Pizzuto, Elena/Corazza, Serena 1996 Noun Morphology in Italian Sign Language. In: Lingua 98, 169⫺196.
Romaine, Suzanne 1989 The Evolution of Linguistic Complexity in Pidgin and Creole Languages. In: Hawkins, John A./Gell-Mann, Murray (eds.), The Evolution of Human Languages. Santa Fe Institute: Addison-Wesley Publishing Company, 213⫺238.
Sandler, Wendy 1989 Phonological Representation of the Sign: Linearity and Nonlinearity in American Sign Language. Dordrecht: Foris.
Sandler, Wendy 1993 Linearization of Phonological Tiers in American Sign Language. In: Coulter, Geoffrey R. (ed.), Phonetics and Phonology. Vol. 3: Current Issues in ASL Phonology. San Diego, CA: Academic Press, 103⫺129.
Sandler, Wendy 1996 A Negative Suffix in ASL. Paper Presented at the 5th Conference on Theoretical Issues in Sign Language Research (TISLR), Montreal, Canada.
Sandler, Wendy 1999a Cliticization and Prosodic Words in a Sign Language. In: Kleinhenz, Ursula/Hall, Tracy (eds.), Studies on the Phonological Word. Amsterdam: Benjamins, 223⫺254.
Sandler, Wendy 1999b The Medium and the Message: Prosodic Interpretation of Linguistic Content in Israeli Sign Language. In: Sign Language & Linguistics 2(2), 187⫺215.
Sandler, Wendy/Aronoff, Mark/Meir, Irit/Padden, Carol 2011 The Gradual Emergence of Phonological Form in a New Language. In: Natural Language and Linguistic Theory 29, 503⫺543.
Schwager, Waldemar/Zeshan, Ulrike 2008 Word Classes in Sign Languages: Criteria and Classification. In: Studies in Language 32(3), 509⫺545.
Senghas, Ann 1995 Children’s Contribution to the Birth of Nicaraguan Sign Language. PhD Dissertation, MIT.
Slobin, Dan 2008 Breaking the Molds: Signed Languages and the Nature of Human Language. In: Sign Language Studies 8(2), 114⫺130.
Stokoe, William C. 1960 Sign Language Structure: An Outline of the Visual Communication Systems of the American Deaf. Studies in Linguistics Occasional Papers 8. Buffalo: University of Buffalo Press.
Supalla, Ted/Newport, Elissa 1978 How Many Seats in a Chair? The Derivation of Nouns and Verbs in American Sign Language. In: Siple, Patricia (ed.), Understanding Language through Sign Language Research. New York: Academic Press, 91⫺132.
Supalla, Ted 1998 Reconstructing Early ASL Grammar through Historic Films. Paper Presented at the 6th International Conference on Theoretical Issues in Sign Language Linguistics (TISLR), Gallaudet University, Washington, D.C.
Sutton-Spence, Rachel/Woll, Bencie 1999 The Linguistics of British Sign Language. Cambridge: Cambridge University Press.
Taub, Sarah F. 2001 Language from the Body: Iconicity and Metaphor in American Sign Language. Cambridge: Cambridge University Press.
Tobin, Yishai 2008 Looking at Sign Language as a Visual and Gestural Shorthand. In: Poznań Studies in Contemporary Linguistics 44(1), 103⫺119.
Wilbur, Ronnie B. 1979 American Sign Language and Sign Systems: Research and Application. Baltimore: University Park Press.
Wilbur, Ronnie B. 2000 Phonological and Prosodic Layering of Nonmanuals in American Sign Language. In: Emmorey, Karen/Lane, Harlan (eds.), The Signs of Language Revisited. Mahwah, NJ: Lawrence Erlbaum Associates, 215⫺244.
Zeshan, Ulrike 2000 Sign Language in Indo-Pakistan: A Description of a Signed Language. Amsterdam: Benjamins.
Zeshan, Ulrike 2002 Towards a Notion of ‘Word’ in Sign Languages. In: Dixon, Robert M. W./Aikhenvald, Alexandra Y. (eds.), Word: A Cross-linguistic Typology. Cambridge: Cambridge University Press, 153⫺179.
Zeshan, Ulrike 2004a Interrogative Constructions in Signed Languages: Crosslinguistic Perspectives. In: Language 80(1), 7⫺39.
Zeshan, Ulrike 2004b Hand, Head, and Face: Negative Constructions in Sign Languages. In: Linguistic Typology 8(1), 1⫺58.
Zwicky, Arnold M./Pullum, Geoffrey K. 1983 Cliticization vs. Inflection: English N’T. In: Language 59, 502⫺513.
Irit Meir, Haifa (Israel)
6. Plurality

1. Introduction
2. Nouns and noun phrases
3. Pronouns, numeral incorporation, and number signs
4. Verb agreement and classifier verbs
5. Pluralization across modalities
6. Conclusion
7. Literature
Abstract

Both sign and spoken languages make use of a variety of plural marking strategies. The choice of strategy depends on lexical, phonological, and morphosyntactic properties of the sign to be modified. The description of basic plural patterns is supplemented by a typological investigation of plural marking across sign languages. In addition, we discuss the realization of the plural feature within noun phrases, the expression of plural with pronouns as well as with agreement and classifier verbs, and the structure of number systems in sign languages. Finally, we compare pluralization in spoken languages to the patterns found in sign languages and account for the modality-specific properties of plural formation in sign languages.
1. Introduction

The topic of this chapter is pluralization in sign language. All natural languages seem to have means to distinguish a single entity (singular) from a number of entities (plural). This distinction is expressed by a difference in the grammatical category of number. Typically, the singular is the unmarked form, whereas the plural is the marked form, which is derived from the singular by specific morphological operations such as affixation, stem-internal change, or reduplication. Plural can be expressed on nouns, pronouns, demonstratives, determiners, verbs, adjectives, and even prepositions. In this chapter, we will mainly be concerned with singular and plural forms, although many languages have more fine-grained distinctions such as singular, dual, and plural (but see sections 3 and 4, which show that sign languages also allow for more fine-grained distinctions).
Patterns of plural marking have been described for a number of different sign languages: see Jones and Mohr (1975), Wilbur (1987), Valli and Lucas (1992), and Perry (2004) for American Sign Language (ASL; also see chapters 7, 11, and 13); Skant et al. (2002) for Austrian Sign Language (ÖGS); Sutton-Spence and Woll (1999) for British Sign Language (BSL; also see chapter 11); Perniss (2001) and Pfau and Steinbach (2005b, 2006b) for German Sign Language (DGS); Heyerick and van Braeckevelt (2008) and Heyerick et al. (2009) for Flemish Sign Language (VGT); Schmaling (2000) for Hausa Sign Language (Hausa SL); Zeshan (2000) for Indopakistani Sign Language (IPSL); Stavans (1996) for Israeli Sign Language (Israeli SL); Pizzuto and Corazza (1996) for Italian Sign Language (LIS); Nijhof and Zwitserlood (1999) for Sign Language of the Netherlands (NGT); and Kubuş (2008) and Zwitserlood, Perniss, and Özyürek (2011) for Turkish Sign Language (TİD). Although there are many (brief) descriptions of plural marking in individual sign languages (but only a few theoretical analyses), a comprehensive (cross-modal) typological study of pluralization in the visual-manual modality is still lacking. Parts of this chapter build on Pfau and Steinbach (2005b, 2006b), who provide a comprehensive overview of plural marking in DGS and discuss typological variation as well as modality-specific and modality-independent aspects of pluralization in sign languages.

In section 2, we start our investigation with the nominal domain and discuss plural marking on nouns and noun phrases. We first describe the basic patterns of plural marking attested in many different sign languages, namely (two kinds of) reduplication and zero marking; we then discuss typological differences between sign languages. In section 3, we address pronouns, number signs, and numeral incorporation. Section 4 turns to the verbal domain and describes plural marking on agreement and classifier verbs. Section 5 gives a brief typological survey of typical patterns of plural formation in spoken languages and discusses similarities and differences between spoken and sign languages; there, we also try to account for the modality-specific properties of pluralization in sign languages described in the previous sections. Finally, the main findings of this chapter are summarized in section 6.
2. Nouns and noun phrases

Descriptions of pluralization in many different sign languages show that within a single sign language, various plural marking strategies may coexist. On the one hand, certain strategies such as reduplication and the use of numerals and quantifiers are attested in most sign languages. On the other hand, sign languages differ from each other to a certain degree with respect to the morphological realization of plural features. In this section, we first discuss the realization of the plural feature on the noun (section 2.1) and then turn to pluralization and plural agreement within noun phrases (section 2.2). We illustrate the basic patterns with examples from DGS but also include examples from other sign languages to illustrate typological variation.
2.1. Nominal number inflection

Two general patterns of nominal plural formation that are mentioned frequently in the literature are zero marking and reduplication (or, to be more precise, triplication; see
below) of the noun. Reduplication typically comes in two types: (i) simple reduplication and (ii) sideward reduplication. Interestingly, both kinds of reduplication apply only to certain kinds of nouns. We will see that the choice of strategy depends on phonological features of the underlying noun (for phonological features, cf. chapter 3, Phonology). Hence, we are dealing with phonologically triggered allomorphy, and the pluralization patterns in sign languages can be compared to the phonologically constrained plural allomorphy found in many spoken languages. We come back to this issue in section 5.
2.1.1. Phonological features and plural marking strategies

In (1), we provide an overview of the phonological features constraining nominal plural marking in DGS (and many other sign languages) and the corresponding plural marking strategies (cf. Pfau/Steinbach 2005b, 2006b). As illustrated in (1), some of these features depend on others. The distinction between complex and simple movement, for instance, is relevant only for non-body anchored nouns. Moreover, the distinction between lateral and midsagittal place of articulation applies only to non-body anchored nouns performed with a simple movement. Consequently, we arrive at four different classes (1a⫺d) and potentially four different patterns of plural marking. However, since all nouns phonologically specified as either having complex movement or being body anchored use the same pattern (zero marking), and since reduplication comes in two types, we have basically two strategies of plural marking altogether: (i) (two kinds of) reduplication and (ii) zero marking.
(1) phonological feature                             plural marking strategy
    a. body anchored                                 zero marking
       non-body anchored:
    b. (i) complex movement                          zero marking
       (ii) simple movement:
    c.   (iia) midsagittal place of articulation     simple reduplication
    d.   (iib) lateral place of articulation         sideward reduplication
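The decision procedure in (1) can be summarized as follows. The sketch below is an illustrative restatement of the table, not part of any existing grammar implementation; the feature names follow the discussion above.

```python
# A sketch of the DGS plural allomorphy in (1): phonological features of the
# base noun determine which plural marking strategy is available.

def dgs_plural_strategy(body_anchored, complex_movement=False, place=None):
    if body_anchored or complex_movement:
        return "zero marking"               # classes (1a) and (1b-i)
    if place == "midsagittal":
        return "simple reduplication"       # class (1c), e.g. BOOK
    if place == "lateral":
        return "sideward reduplication"     # class (1d), e.g. CHILD
    raise ValueError("unknown place of articulation")

print(dgs_plural_strategy(body_anchored=True))             # WOMAN: zero marking
print(dgs_plural_strategy(False, complex_movement=True))   # BIKE: zero marking
print(dgs_plural_strategy(False, place="midsagittal"))     # BOOK: simple reduplication
print(dgs_plural_strategy(False, place="lateral"))         # CHILD: sideward reduplication
```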
It will become clear in the examples below that plural reduplication usually involves two repetitions. Moreover, various articulatory factors may influence the number of repetitions: (i) the effort of production (more complex signs like, e.g., vase tend to be repeated only once), (ii) the speed of articulation, and (iii) the syllable structure of a mouthing that co-occurs with a sign, since the manual and the non-manual part tend to be synchronized (cf. Nijhof/Zwitserlood 1999; Pfau/Steinbach 2006b). In addition, the prosodic structure may influence the number of repetitions, which seems to increase in prosodically prominent positions, for instance, at the end of a prosodic domain or in a position marked as focus (Sandler 1999; cf. also chapter 13 on noun phrases). Finally, we find some individual (and probably stylistic) variation among signers with respect to the number of repetitions. While some signers repeat the base noun twice, others may either repeat it only once or three times. Although additional repetitions may emphasize certain aspects of meaning, we assume that the distinction between reduplication and triplication is not part of the morphosyntax of plural marking proper.
Because the pattern with two repetitions (i.e. triplication) appears to be the most common one, the following discussion of the data is based on this pattern. To simplify matters, we will use the established term ‘reduplication’ to describe this specific morphological operation of plural marking in sign languages. We will address the difference between reduplication and triplication in more detail in section 5 below. Let us first have a closer look at the four classes listed in (1).
2.1.2. Zero marking

In DGS, body anchored nouns (1a) pattern with non-body anchored nouns that are lexically specified for a complex movement (1b) in that neither type permits the overt morphological realization of the plural feature. In both cases, zero marking is the only grammatical option. As can be seen in Figures 6.1 and 6.2, simple as well as sideward reduplication leads to ungrammaticality with these nouns. Note that in the glosses, plural reduplication is indicated by ‘++’, whereby every ‘+’ represents one repetition of the base form. Hence, the ungrammatical form woman++ in Figure 6.1b would be performed three times in total. ‘>’ indicates a sideward movement, that is, the combination of both symbols, ‘>+>+’, stands for sideward plural reduplication. The direction of the sideward movement depends on the handedness of the signer.

Obviously, in DGS, phonological features may block overt plural marking. Both kinds of plural reduplication are incompatible with the inherent place of articulation feature body anchored and the complex movement features repeat, circle, and alternating.
Fig. 6.1: Plural marking with the body anchored noun woman in DGS. Copyright © 2005 by Buske Verlag. Reprinted with permission.
Fig. 6.2: Plural marking with the complex movement noun bike in DGS. Copyright © 2005 by Buske Verlag. Reprinted with permission.
Like many other morphological processes in sign languages, such as agreement (cf. chapter 7) or reciprocal marking (Pfau/Steinbach 2003), plural marking is thus constrained by phonological features of the underlying sign. We come back to the influence of phonology in section 5. Interestingly, the features that block plural reduplication do not block similar kinds of reduplication in aspectual and reciprocal marking. Hence, it appears that certain phonological features only constrain specific morphological processes (Pfau/Steinbach 2006b).
2.1.3. Reduplication

So far, we have seen that reduplication is not an option for DGS nouns that are body anchored or involve complex movement. By contrast, non-body anchored midsagittal and lateral nouns permit reduplication. Figures 6.3 and 6.4 illustrate that for symmetrical midsagittal nouns such as book, the plural form is marked by simple reduplication of the whole sign, whereas the crucial morphological modification of non-body anchored lateral nouns such as child is sideward reduplication. Sideward reduplication is a clear example of partial reduplication, since the reduplicant(s) are performed with a shorter movement. The case of simple reduplication is not as clear. Typically, the reduplicant(s) are performed with the same movement as the base; in this case, simple reduplication would be an example of complete reduplication. Occasionally, however,
Fig. 6.3: Plural marking with the midsagittal noun book in DGS. Copyright © 2005 by Buske Verlag. Reprinted with permission.
Fig. 6.4: Plural marking with the lateral noun child in DGS. Copyright © 2005 by Buske Verlag. Reprinted with permission.
the reduplicant(s) are performed with a reduced movement, and thus we are dealing with partial reduplication.

Note that body anchored nouns denoting human beings have an alternative way of plural marking that involves reduplication. The plural form of nouns like woman, man, or doctor can be formed by means of the noun person. Since person is a one-handed lateral sign, its plural form in (2) involves sideward reduplication. Syntactically, person is inserted right-adjacent to the noun. Semantically, person is simply an alternative plural marker for a specific class of nouns, without additional meaning.
(2) woman person>+>+ [DGS]
    ‘women’
2.1.4. Typological variation

The basic strategies described for DGS are also found in many other sign languages (see the references listed at the beginning of this chapter). Typologically, reduplication and zero marking seem to be the basic strategies of plural marking across sign languages. Likewise, the constraints on plural formation are very similar to the ones described for DGS. In BSL, for example, pluralization also involves reduplication and sideward movement. According to Sutton-Spence and Woll (1999), the plural form of some nouns is marked by a ‘distributive bound plural morpheme’, which triggers two repetitions (i.e. triplication) of the underlying noun, each repetition being performed in a different location. Like sideward reduplication in DGS, sideward reduplication in BSL is only possible with non-body anchored nouns and signs without inherent complex movement. The plural of body anchored nouns and nouns with complex movement is marked without any reduplication, i.e. the only remaining option for these nouns is zero marking. Likewise, Pizzuto and Corazza (1996) describe pluralization patterns for LIS which are very similar to those described for DGS and BSL. Again, reduplication is the basic means of plural formation. Pizzuto and Corazza also distinguish between body anchored nouns and nouns signed in the neutral sign space; the latter are subdivided into signs involving simple movement and signs involving complex movement. As in DGS and BSL, reduplication is only possible for signs performed in the neutral sign space without complex movement.

Although the patterns of plural formation appear to be strikingly similar across sign languages, we also find some variation, which mainly results from differences in the phonological restrictions on plural formation and in the available manual and non-manual plural markers. A typological difference in the phonological restrictions can be found between DGS, on the one hand, and ASL and NGT, on the other. Unlike DGS, NGT allows simple reduplication of at least some body anchored nouns like glasses and man (cf. Nijhof/Zwitserlood 1999; Harder 2003; Pfau/Steinbach 2006b). In DGS, simple reduplication is possible neither for the phonologically identical sign glasses, nor for the phonologically similar sign man. While there are differences with respect to the behavior of body anchored nouns, nouns with inherent complex movement and nouns performed in the lateral sign space or symmetrically to the midsagittal plane seem to behave alike in DGS and NGT. Only the latter permit sideward reduplication in both sign languages.
ASL also differs from DGS in that reduplication in plural formation is less constrained; moreover, ASL uses additional plural marking strategies. Only one of the four strategies of plural formation in ASL discussed in Wilbur (1987) is also found in DGS. The first strategy applies to nouns articulated with one hand at a location on the face; with these nouns, the plural form is realized by repeating the sign alternately with both hands. The second strategy applies to nouns that make contact with some body part or involve a change of orientation; in this case, the plural form is realized by reduplication, and typically a horizontal arc path movement is added. The third strategy holds for nouns that involve some kind of secondary movement. Such nouns are pluralized without reduplication, by continuing the secondary movement (and possibly by adding a horizontal arc path movement). The fourth strategy is similar to that described for DGS above: nouns that have inherent repetition of movement in their singular form cannot undergo reduplication. Hence, in contrast to DGS, ASL permits plural reduplication of some body anchored nouns and nouns with complex movement, and it has a specific plural morpheme, i.e. a horizontal arc path. Moreover, plural reduplication of secondary movements is possible in ASL but not in DGS. However, both languages permit sideward reduplication of lateral nouns and simple reduplication of midsagittal nouns.

Skant et al. (2002) describe an interesting plural marking strategy in ÖGS which is similar to the first strategy found in ASL. With some two-handed signs like high-rise-building, in which both hands perform a parallel upward movement, the plural is expressed by a repeated alternating movement of both hands. With one-handed nouns, the non-dominant hand can be added to perform the alternating movement expressing the plural feature. This strategy can be analyzed as a modality-specific stem-internal change. A similar strategy is reported by Heyerick and van Braeckevelt (2008) and Heyerick et al. (2009), who mention that in VGT, two referents (i.e. dual) can be expressed by articulating a one-handed sign with two hands, i.e. ‘double articulation’.

A non-manual plural marker has been reported for LIS (cf. Pizzuto/Corazza 1996). With many body anchored nouns, the plural form is signed with an accompanying head movement from left to right (at least three times); in addition, each movement is marked with a head nod. Moreover, in LIS, inherent (lexical) repetitions tend to be reduced to a single movement if the non-manual head movement accompanies the plural form of the noun.

Let us finally turn to two languages that mainly use the zero marking strategy. In IPSL, all nouns can be interpreted as singular or plural, because IPSL does not use overt plural marking strategies such as simple or sideward reduplication (cf. Zeshan 2000); the interpretation of a noun depends on the syntactic and semantic context in which it appears. Zeshan points out that the lateral noun child is the only noun in IPSL with a morphologically marked plural form that occurs with some frequency. Just like the phonologically similar lateral sign in DGS (cf. Figure 6.4 above), child in IPSL also permits sideward reduplication. Likewise, Zwitserlood, Perniss, and Özyürek (2011) report that TİD does not exhibit overt morphological marking of the plural feature on the noun.
Instead, plurality is expressed by a variety of spatial devices, which reflect the topographic relations between the referents. These spatial devices will be discussed in section 4 below in more detail. Zwitserlood, Perniss, and Özyürek argue that although information about the number of referents falls out as a result of the use of sign space, “the primary linguistic function of these devices is [...] not the
expression of plurality [...], but rather the depiction of referent location, on the one hand, and predicate inflection, on the other hand”. They conclude that TİD, like IPSL, does not have a productive morphological plural marker (but see Kubuş (2008) for a different opinion). The absence of overt plural marking in IPSL and TİD is, however, not exceptional. We will see in the next subsection that in most sign languages, overt plural marking (i.e. reduplication) is only possible if the noun phrase does not contain a numeral or quantifier. Moreover, in contexts involving spatial localization, it is not the noun but the classifier handshape that is (freely) reduplicated. Besides, Neidle (this volume) argues that in ASL “reduplication may be correlated with prosodic prominence and length” (cf. chapter 13 on noun phrases). Therefore, plural reduplication is more likely to occur in prosodically prominent positions, i.e. in sentence-final position or in positions marked as focus. Consequently, reduplication is only grammatical for a small class of nouns in a limited set of contexts and even with lateral and midsagittal nouns we frequently find zero marking. Hence, reduplication is expected to be rare although it is the basic morphological means of plural formation in sign languages (cf. also Baker-Shenk/Cokely 1980).
2.1.5. Summary

Reduplication and zero marking appear to be two basic pluralization strategies in the nominal domain attested in many different sign languages. Besides simple and sideward reduplication, some sign languages have at their disposal (alternating) movement by the non-dominant hand, reduplication of secondary movements, a horizontal arc path movement, and non-manual means. The general phonological restrictions on overt plural marking seem to be very similar across sign languages: sideward reduplication is restricted to lateral nouns and simple reduplication to midsagittal nouns. Nouns with complex movement only allow zero marking. Only within the class of body anchored nouns do we find some variation between languages: some sign languages permit simple reduplication of body anchored nouns, while others do not.
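Read procedurally, the constraints just summarized amount to a small decision rule over three phonological features of the noun. The following sketch is a toy illustration of the DGS-type pattern only; the feature labels and the always-available zero-marking option are our own simplifications, not an implemented grammar:

```python
# Toy decision rule for the DGS-type constraints on nominal plural
# marking summarized above. Feature labels are ad hoc simplifications.

def plural_strategies(body_anchored: bool, complex_movement: bool,
                      lateral: bool) -> list[str]:
    """Return the plural marking options available to a noun."""
    if body_anchored or complex_movement:
        # Body anchored nouns and nouns with inherent complex movement
        # resist reduplication in DGS; only zero marking remains.
        return ["zero marking"]
    if lateral:
        # Lateral nouns reduplicate with sideward displacement; zero
        # marking remains an option, as noted in the text.
        return ["sideward reduplication", "zero marking"]
    # Midsagittal nouns reduplicate in place.
    return ["simple reduplication", "zero marking"]

print(plural_strategies(True, False, False))    # glasses-type noun
print(plural_strategies(False, False, True))    # child-type noun
print(plural_strategies(False, False, False))   # book-type noun
```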
2.2. Pluralization and number agreement within noun phrases

This section deals with plural marking within the noun phrase, which is an important domain for the realization of grammatical features such as gender, case, and number. Therefore, in many languages, pluralization affects not only nouns but also other elements within the noun phrase such as determiners and adjectives. Moreover, we find a considerable degree of variation in the realization of the number feature within the noun phrase: while some languages show number agreement between nouns, adjectives, and numerals or quantifiers, others do not. Here we focus on sign languages. Spoken languages will be discussed in section 5. For number marking and number agreement within the noun phrase, see also chapter 13 on noun phrases. Languages with overt plural marking on head nouns have two options: they can express the plural feature more than once within the noun phrase or they only express plurality on one element within the noun phrase. In the latter case, plural is usually
(semantically) expressed by a numeral or quantifier and the head noun is not inflected for number. Most sign languages belong to the second class of languages, i.e. languages without number agreement within the noun phrase. In the previous subsection, we have seen that in sign languages, plural reduplication is only found with some nouns in some contexts and we already mentioned that one reason for this infrequency of overt nominal plural marking is that simple and sideward reduplication is blocked whenever a numeral or quantifier appears within the noun phrase, as is illustrated by the DGS examples in (3ab). Similarly, in noun phrases containing an adjective, the plural feature is only expressed on the head noun even if the adjective has all relevant phonological properties for simple or sideward reduplication. Again, noun phrase internal number agreement is blocked (3c).

(3) a. * many child>+>+          a'. many child             [DGS]
       ‘many children’               ‘many children’
    b. * five book++             b'. five book
       ‘five books’                  ‘five books’
    c. * child>+>+ tall>+>+      c'. child>+>+ tall
       ‘tall children’               ‘tall children’
The prohibition against number agreement within the noun phrase is a clear tendency but not a general property of all sign languages. ASL and Israeli SL are similar to DGS in this respect (Wilbur 1987; Stavans 1996). In ASL, for instance, quantifiers like many, which are frequently used in plurals, also block overt plural marking on the head noun. Nevertheless, sign languages, like spoken languages, also differ from each other with respect to number agreement within the noun phrase. In NGT, ÖGS (Skant et al. 2002), LIS (Pizzuto/Corazza 1996), and Hausa SL (Schmaling 2000), number agreement within the noun phrase seems to be at least optional.
2.3. Summary

Given the phonological restrictions on plural marking and the restrictions on number agreement, plural reduplication is correctly predicted to be rare in simple plurals. Although reduplication can be considered the basic morphological plural marker, it is rarely found in sign languages since it is blocked by phonological and syntactic constraints (cf. also section 5 below). Table 6.1 illustrates the plural marking strategies and the manual and non-manual plural markers used in different sign languages. ‘√’ stands for overt marking and ‘:’ for zero marking. The strategy that seems to be typologically less frequent or even nonexistent is given in parentheses. Note that Table 6.1 only illustrates general tendencies. More typological research is necessary to get a clearer picture of nominal plural marking in sign languages.
Tab. 6.1: Plural marking strategies in sign languages

phonological feature | body anchored | complex movement | midsagittal | lateral
noun | : (√) | : (√) | √ (:) | √ (:)
noun with numeral/quantifier | : | : | : (√) | : (√)
manual and non-manual plural markers | simple reduplication; double articulation; alternating movements; horizontal arc path movement; head movement and head nod | simple reduplication; horizontal arc path movement | simple reduplication; alternating movements | sideward reduplication
3. Pronouns, numeral incorporation, and number signs

In spoken languages, pronouns usually realize at least two morphological features, namely person and number. Similarly, sign language pronouns also realize these two features. As opposed to spoken languages, however, sign languages do not employ distinct forms (cf. English I, you, he/she/it, we, you, they) but systematically use the sign space to express person and number. Concerning person, there is a long-standing debate whether sign languages distinguish second and third person. By contrast, the realization of number on pronouns is more straightforward (for a more detailed discussion of this issue, cf. McBurney (2002), Cormier (2007), and chapter 11, Pronouns).
3.1. Pronouns

Sign languages typically distinguish singular, dual, and distributive and collective plural forms of pronouns. In the singular form, a pronoun usually points with the index finger directly to the location of its referent in sign space (the R-locus). The number of extended fingers can correspond to the number of referents. In DGS, the extended index and middle finger are used to form the dual pronoun 2-of-us, which oscillates back and forth between the two R-loci of the referents the pronoun is linked to. In some sign languages, the extension of fingers can be used to indicate up to nine referents. We come back to numeral incorporation below. The collective plural form of a
pronoun is realized with a sweeping movement across the locations in sign space associated with the R-loci of the referents. These R-loci can either be in front of the signer (non-first person) or next to the signer including the signer (first person). By contrast, the distributive form involves multiple repetitions of the inherent short pointing movement of the pronoun along an arc. Plural pronouns are usually less strictly related to the R-loci of their referents than singular pronouns. An interesting question is whether sign languages have a privileged (lexicalized) dual pronoun, which is not derived by numeral incorporation. The dual form seems to differ from number incorporated pronouns. While the dual form is performed with a back and forth movement, pronouns with numeral incorporation are performed with a circular movement. Moreover, number marking for the dual form seems to be obligatory, whereas the marking of three or more referents by numeral incorporation appears to be optional (cf. McBurney 2002).
3.2. Numeral incorporation

A modality-specific property of sign languages is the specific kind of numeral incorporation found with pronouns, as illustrated in (4), and temporal expressions, as illustrated in (5). Numeral incorporation has been documented for various sign languages (see Liddell (1996) for ASL, chapter 11 on pronouns for BSL, Perniss (2001) and Mathur/Rathmann (2011) for DGS, Schmaling (2000) for Hausa SL, Zeshan (2000) for IPSL, Stavans (1996) for Israeli SL, Zeshan (2002) for TİD, and Heyerick/van Braeckevelt (2008) and Heyerick et al. (2009) for VGT).

(4) Numeral incorporation with pronouns                      [DGS]
    2-of-us, 3-of-us, …, 2-of-you, 3-of-you, …, 2-of-them, 3-of-them, …
(5) Numeral incorporation with temporal expressions          [DGS]
    a. 1-hour, 2-hour, 3-hour, …
    b. 1-week, 2-week, 3-week, …
    c. 1-year, 2-year, 3-year, …
    d. in-1-day, in-2-day, in-3-day, …
    e. before-1-year, before-2-year, before-3-year, …
Pronouns and temporal expressions have the ability to ‘incorporate’ the handshape of numerals. Usually, the handshape corresponds to the numeral used in a sign language (cf. below). Number incorporated pronouns are performed with a small circular movement in the location associated with the group of referents. Because of the physical properties of the two manual articulators, sign languages can in principle incorporate numbers up to ten. With pronouns, five seems to be the preferred upper limit of incorporation (note, however, that examples with more than five are attested). With temporal expressions, examples that incorporate numbers up to ten are more frequent. The specific restrictions on pronominal numeral incorporation may be related to the following difference between pronouns and temporal expressions. Unlike temporal expressions, number incorporated pronouns involve a small horizontal circular movement in a specific location of the sign space. This particular movement between the R-loci the pronoun is linked to is harder to perform with two hands and may therefore be blocked
for phonetic reasons (cf. also section 4 for phonetic blocking of plural forms of agreement verbs). By contrast, temporal expressions are not linked to loci in the sign space. Therefore, a two-handed variant is generally easier to perform. Finally note that phonological properties of individual number signs such as the specific movement pattern of ten in ASL can block numeral incorporation.
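As a rough schematic, numeral incorporation can be modelled as substituting the numeral’s handshape into the pronoun or temporal base while keeping the base’s movement and location. The function below is a hypothetical illustration; the gloss format follows (4)/(5), but the hard numeric cut-offs encode the preferences noted above, not absolute grammatical bounds:

```python
# Hypothetical sketch of numeral incorporation: the base sign keeps its
# movement/location and takes over the numeral's handshape. The limits
# (up to 5 for pronouns, up to 10 for temporal expressions) reflect the
# preferences noted in the text, not categorical rules.

def incorporate(base: str, n: int, is_pronoun: bool) -> str:
    lower, upper = (2, 5) if is_pronoun else (1, 10)
    if not lower <= n <= upper:
        raise ValueError(f"{n} is outside the usual range for {base!r}")
    return f"{n}-{base}"   # gloss format as in (4)/(5)

print(incorporate("of-us", 3, is_pronoun=True))    # 3-of-us
print(incorporate("week", 2, is_pronoun=False))    # 2-week
print(incorporate("hour", 1, is_pronoun=False))    # 1-hour
```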
3.3. Number signs

So far, we have seen that numeral incorporation targets the handshape of the corresponding number sign. But where do the number signs come from? Number systems of sign languages are constrained by the physical properties of the articulators. Since sign languages use two manual articulators with five fingers each, they can directly express the numbers 1 to 10 by extension of the fingers. Hence, the number systems used in many sign languages have a transparent gestural basis. For number systems in different sign languages, see Leybaert and van Cutsem (2002), Iversen, Nuerk, and Willmes (2004), Iversen et al. (2006), Iversen (2008), Fernández-Viader and Fuentes (2008), McKee, McKee, and Major (2008), and Fuentes et al. (2010). Since the manual articulators have 10 fingers, the base of sign language number systems is usually 10. The DGS number system is based on 10 with a sub-base of 5. By contrast, ASL uses a number system that is only based on 10. In addition to this typological variation, we also find variation within a system. This ‘dialectal’ variation may affect the use of extended fingers, the use of movement to express numbers higher than 10, or idiosyncratic number signs. Let us consider the number system of DGS first. The first five numbers are realized through finger extension on the dominant hand. one is expressed with one finger extended (either thumb or index finger), two with two fingers extended (either thumb and index finger or index and middle finger), three with three fingers extended (thumb, index and middle finger), and four with four fingers extended (either thumb to ring finger or index finger to pinky). Finally, five is expressed with all five fingers extended. The number signs six to ten are expressed on two hands. The non-dominant hand has all five fingers extended and the dominant hand expresses six to ten just like one to five. Number signs for numbers higher than 10 are derived from this basis. In DGS, the number signs eleven, twelve, thirteen, … as well as twenty, thirty, … and one-hundred, two-hundred, three-hundred … use the same handshape as the basic number signs one to nine. In addition, they have a specific movement expressing the range of the number (i.e. 11 to 19, 20 to 90, 100 to 900, or 1000 to 9000). The signs for 11 to 19 are, for example, performed either with a circular horizontal movement or with a short movement, changing the facing of the hand(s) (at the beginning of this short movement, the palm is facing the signer, at the end it faces down), and the signs for 20 to 90 are produced with a repeated movement of the extended fingers. Finally note that complex numbers like 25, 225, or 2225 are composed of the basic number signs: 25 is, for instance, a combination of the signs five and twenty. Exceptions are the numbers 22, 33, 44, …, which are expressed by sideward reduplication of two, three, four, … As opposed to DGS, ASL only uses one hand to express the basic numbers 1 to 10. one starts with the extended index finger, two adds the extended middle finger, three the ring finger, four the pinky, and five the thumb. Hence, the ASL number sign for
five is identical to the corresponding sign in DGS. In ASL, the number signs for 6 to 9 are expressed through contact between the thumb and one of the other four fingers: in six, the thumb has contact with the pinky, in seven with the ring finger, in eight with the middle finger, and in nine with the index finger. ten looks like one version of one in DGS, i.e. only the thumb is extended. In addition, ten has a horizontal movement of the wrist. Other one-handed number systems differ from ASL in that they use the same signs for the numbers 6 to 9 as one variant in DGS uses for 1 to 5: six is expressed with the extended thumb, seven with the extended thumb and index finger, eight with the extended thumb, index, and middle finger, … In ASL, higher numbers are expressed by producing the signs for the digits in linear order, i.e. ‘24’ = two + four, ‘124’ = one + two + four. Note that the number system of ASL, just like that of DGS, also shows some dialectal variation. A comparison of DGS and ASL shows that two-handed number systems like DGS only use five different handshapes, whereas one-handed systems like ASL use ten different handshapes. Moreover, the two-handed system of DGS expresses higher numbers through a combination of basic number and movement. The one-handed system of ASL expresses higher numbers by a linear combination of the signs for the digits. And finally, DGS, like German, expresses higher numbers by inversion (i.e. ‘24’ is four + twenty). In ASL, the linear order is not inverted.
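The compositional difference between the two systems for multi-digit numbers can be made concrete in a toy transcription. Everything below is a simplification: dialectal variants, the movement components of DGS decade signs, and the idiosyncratic DGS forms for 22, 33, … are all ignored.

```python
# Toy transcription of the two number systems described above.
# Glosses are schematic; movement components and dialectal and
# idiosyncratic forms (e.g. DGS 22, 33, ...) are ignored.

DIGITS = ["zero", "one", "two", "three", "four",
          "five", "six", "seven", "eight", "nine"]
DECADES = {2: "twenty", 3: "thirty", 4: "forty", 5: "fifty",
           6: "sixty", 7: "seventy", 8: "eighty", 9: "ninety"}

def dgs_number(n: int) -> list[str]:
    """DGS-style (two-digit numbers): inverted order, digit sign before decade sign."""
    tens, ones = divmod(n, 10)
    return [DIGITS[ones], DECADES[tens]] if ones else [DECADES[tens]]

def asl_number(n: int) -> list[str]:
    """ASL-style: digits signed in linear, non-inverted order."""
    return [DIGITS[int(d)] for d in str(n)]

print(dgs_number(24))   # ['four', 'twenty']  -- '24' is four + twenty
print(asl_number(24))   # ['two', 'four']
print(asl_number(124))  # ['one', 'two', 'four']
```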
4. Verb agreement and classifier verbs

In the last section, we have seen that in sign languages, pronouns can express number by modification of movement (i.e. by the addition of a sweeping movement) or by repetition of the pronoun (i.e. a distributed pointing motion towards multiple locations). In this section we will discuss two related phenomena: the plural forms of agreement verbs and classifier verbs. We will see that both use the sign space in a similar way to express plurality. A comprehensive overview of verb agreement can be found in chapter 7. Classifier verbs are extensively discussed in Zwitserlood (2003), Benedicto and Brentari (2004), and in chapter 8 on classifiers.
4.1. Verb agreement

In spoken and sign languages, verb agreement seems to have primarily developed from pronouns (for sign languages see Pfau/Steinbach 2006a, 2011). In both modalities, pronominalization and verb agreement are related grammatical phenomena. Hence, it comes as no surprise that agreement verbs use the same spatial means as pronouns to express pluralization. Agreement verbs agree with the referential indices of their arguments, which are realized in the sign space as R-loci. Verbs, like pronouns, have a distributive and a collective plural form. The distributive form of plural objects is, for instance, realized by multiple reduplication along an arc movement in front of the signer. In some contexts, the reduplication can also be more random, and with one-handed agreement verbs, it can also be performed with both hands. The collective form is realized with a sweeping movement across the locations associated with the R-loci,
i.e. by an arc movement without reduplication. The plural feature is thus realized spatially in the sign space. In chapter 7, Mathur and Rathmann propose the following realizations of the plural feature in verb agreement. According to (6), the singular feature is unmarked and realized as a zero form. The marked plural feature encodes the collective reading. The distributive plural form in (6ii) may be derived by means of reduplication of the singular form (for a more detailed discussion, cf. chapter 7 on verb agreement and the references cited there).
(6) Number
    i.  Features: Plural (collective): [+pl] → horizontal arc (marked)
                  Singular: [−pl] → Ø
    ii. Reduplication: exhaustive (distributive), dual
Note that phonetic constraints may cause agreement gaps. Mathur and Rathmann (2001, 2011) show that articulatory constraints block first person plural object forms such as ‘give us’ or ‘analyze us’ in ASL or third person plural object forms with reduplication of the verbs (i.e. distributive reading) like ask in ASL or tease in DGS (for phonetic constraints, cf. also chapter 2, Phonetics).
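A schematic spell-out of (6): the unmarked singular is realized as zero, the collective plural as a horizontal arc, and the distributive plural as reduplication of the singular form along the arc. The gloss notation below (‘[arc]’, ‘++’) is invented for this illustration:

```python
# Schematic rendering of the feature realizations in (6). The gloss
# notation ('[arc]' for the horizontal arc, '++' for reduplication)
# is invented for this sketch.

def realize_number(verb: str, plural: bool,
                   distributive: bool = False) -> str:
    if not plural:
        return verb                   # [-pl] -> zero realization
    if distributive:
        return f"{verb}++[arc]"       # reduplication along the arc
    return f"{verb}[arc]"             # collective: arc only

print(realize_number("ask", plural=False))                    # ask
print(realize_number("ask", plural=True))                     # ask[arc]
print(realize_number("ask", plural=True, distributive=True))  # ask++[arc]
```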
4.2. Classifier verbs

Many spoken languages do not mark plural on the head noun but use specific numeral classifier constructions. Sign languages also have so-called classifier constructions. They make extensive use of classifier handshapes, which can be used with verbs of motion and location. Sign language classifiers can be compared to noun class markers in spoken languages. Classifier verbs are particularly interesting in the context of plural marking since the plurality of an entity can also be expressed by means of a spatially modified classifier verb. Consider the examples in Figures 6.5, 6.6, and 6.7, which show the pluralization of classifier verbs. Figure 6.5 illustrates the sideward reduplication of the classifier verb. In Figure 6.6, a simple sideward movement is added to the classifier verb and in Figure 6.7 more random reduplications performed by both hands in alternation are added.
Fig. 6.5: Sideward reduplication of a classifier verb in DGS. Copyright © 2005 by Buske Verlag. Reprinted with permission.
Fig. 6.6: Simple sideward movement of a classifier verb in DGS.
Fig. 6.7: Random reduplication of a classifier verb in DGS. Copyright © 2005 by Buske Verlag. Reprinted with permission.
Like verbal agreement inflection, the illustrated spatial modification of classifier verbs is a clear instance of verbal plural inflection (for a detailed discussion of the differences between classifier verbs in sign languages and numeral nominal classification in spoken languages, cf. Pfau/Steinbach 2006b). Consequently, numerals or quantifiers do not block the reduplication of the classifier handshapes. The examples in Figures 6.5 to 6.7 would also be grammatical if we added the quantifier many or the numeral five (i.e. five bike clvertical.pl+>+>). Moreover, the spatial modification of classifier verbs does not only express the plurality of the referent the classifier verb agrees with. It usually also induces the additional semantic effect of a particular spatial localization or arrangement of the referents. Interestingly, the number of reduplications and the spatial localization of agreement and classifier verbs are not grammatically restricted and can thus be modified more freely. Therefore, the whole sign space can be used, as is illustrated in the examples in Figures 6.5 to 6.7 above. If a right-handed signer wants to express that exactly five bikes are standing in a certain spatial relation on the left, s/he can repeat the classifier verb five times in the left (contralateral) sign space. Conversely, the simple plural form of lateral nouns is usually restricted to two repetitions and to the lateral area of the sign space. In section 2 we mentioned that in many sign languages midsagittal nouns such as house or flower also permit sideward reduplication of the whole sign (cf. Figure 6.8). With these nouns, the semantic effect described for classifier verbs is achieved by sideward reduplication of the whole sign. Hence, under certain circumstances, sideward reduplication can also be found with midsagittal nouns. However, in this case the unmarked plural form, i.e. simple reduplication, blocks the simple plural interpretation.

Fig. 6.8: Sideward reduplication of midsagittal nouns in DGS. Copyright © 2005 by Buske Verlag. Reprinted with permission.

Like sideward reduplication of classifier verbs, sideward reduplication of midsagittal nouns does not only express a simple plurality of the entity the noun refers to, but also a specific spatial configuration of these entities. Again, more than two repetitions and the use of the whole sign space are possible. The spatial interpretation of sideward reduplication of agreement and classifier verbs and certain nouns is clearly modality-specific. Since sign languages make use of the three-dimensional sign space, they have the unique potential to establish a relation between plural reduplication and spatial localization of referents (for similar observations in LIS, NGT, BSL, and TİD, cf. Pizzuto/Corazza 1996; Nijhof/Zwitserlood 1999; Sutton-Spence/Woll 1999; Zwitserlood/Perniss/Özyürek 2011).
5. Pluralization across modalities

Finally, in this section we compare the expression of plurality in sign languages to pluralization in spoken languages. First we discuss constraints on plural marking in spoken languages before we turn to differences in the constraints on plural marking and in the output forms in both modalities.
5.1. Pluralization in spoken languages

Plural marking in spoken languages has some interesting similarities to plural marking in sign languages (for a detailed discussion of spoken languages, cf. Corbett 2000). As in sign languages, plural marking in spoken languages can be determined by phonological properties of the noun stem. Moreover, many spoken languages also use reduplication to express the plural feature. In section 2, we have seen that reduplication is the basic means of plural marking in sign languages. Sideward reduplication has been described as a case of partial reduplication and simple reduplication as complete reduplication. Likewise, in spoken languages, pluralization can also be realized by means of partial and complete reduplication. Partial reduplication is illustrated in example (7a) from Ilokano, where only the first syllable of the bisyllabic stem is reduplicated (Hayes/
Abad 1989, 357). The Warlpiri example in (7b) illustrates complete reduplication (Nash 1986, 130). Although both modalities use complete and partial reduplication as a means of plural marking, there are also two crucial differences: (i) only sign languages allow for sideward reduplication since they use a three-dimensional sign space, and (ii) reduplication in sign languages usually involves two repetitions (i.e. triplication) whereas reduplication in spoken languages usually only involves one repetition (but see Blust (2001) for some rare examples of triplication in spoken languages).
(7) a. púsa ‘cat’        a'. pus-púsa ‘cats’            [Ilokano]
    b. kurdu ‘child’     b'. kurdu-kurdu ‘children’     [Warlpiri]
There are two more obvious similarities between plural marking in both modalities: (i) both sign and spoken languages use zero marking, and (ii) the form of a plural morpheme may be determined by phonological properties of the stem. In German, for instance, zero marking is quite common (i.e. Segel (‘sail’ and ‘sails’) or Fehler (‘mistake’ and ‘mistakes’)). Phonological restrictions can be found, for instance, in English and Turkish. In English, the plural suffix /z/ assimilates in the feature [±voice] to the preceding phoneme, i.e. [z] in dogs but [s] in cats. In Turkish, suffix vowels harmonize with the last vowel of the stem with respect to certain features. In pluralization, the relevant feature for the plural suffix -ler is [±back], i.e. ev-ler (‘houses’) but çocuk-lar (‘children’). Besides these cross-modal similarities in nominal plural formation, there are two obvious differences between spoken and sign languages. First, many spoken languages, unlike sign languages, use affixation and word internal stem change as the basic means of plural inflection. Affixation is illustrated in the English and Turkish examples above. An example of stem change is the German word Mütter, which is the plural form of Mutter (‘mother’). In this example, the plural is only marked by the umlaut, i.e. a stem internal vowel change. In sign languages, stem-internal changes, which are frequently observed in other morphological operations, are rarely used for plural marking. Simultaneous reduplication of the sign by the non-dominant hand (as attested, for instance, with some ÖGS signs) is an exception to this generalization. Likewise, sign languages do not use plural affixes – one exception might be the horizontal arc path movement that is used to express plurality in some sign languages (cf. section 2). The lack of affixation in plural marking in the visual-manual modality reflects a general tendency of sign languages to avoid sequential affixation (cf. Aronoff/Meir/Sandler 2005). Second, in spoken languages, the choice of a plural form is not always constrained phonologically but may also be constrained grammatically (i.e. gender), semantically (i.e. semantically defined noun classes), or lexically (cf. Pfau/Steinbach 2006b). The choice of the plural form in German is, for instance, to a large extent idiosyncratic and not determined by phonological properties of the stem. This is illustrated by the German examples in (8). Although the two words in (8ab) have the same rhyme, they take different plural suffixes. In (8cd) we are dealing with two homonymous lexical items, which form their plural by means of different suffixes, where only the former is accompanied by umlaut (cf. Köpcke 1993; Neef 1998, 2000).
(8) a. Haus ‘house’      a'. Häus-er ‘houses’      [German]
    b. Maus ‘mouse’      b'. Mäus-e ‘mice’
    c. Bank ‘bench’      c'. Bänk-e ‘benches’
    d. Bank ‘bank’       d'. Bank-en ‘banks’
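The phonological conditioning of the English and Turkish plural suffixes mentioned above can be stated as two small rules over the stem’s final segment or last vowel. The sketch below uses orthography as a crude stand-in for phonological representations and ignores further details (e.g. the English /ɪz/ allomorph after sibilants):

```python
# Crude illustration of phonologically conditioned plural allomorphy.
# English: the suffix matches the [±voice] value of the final segment.
# Turkish: the suffix vowel harmonizes in [±back] with the last vowel.
# Orthography stands in for phonology; e.g. the English /iz/ allomorph
# after sibilants is ignored.

VOICELESS_FINALS = set("ptkf")            # simplified [-voice] finals

def english_plural(stem: str) -> str:
    allomorph = "s" if stem[-1] in VOICELESS_FINALS else "z"
    return f"{stem}-/{allomorph}/"

FRONT, BACK = set("eiöü"), set("aıou")    # Turkish vowel classes (simplified)

def turkish_plural(stem: str) -> str:
    last_vowel = [c for c in stem if c in FRONT | BACK][-1]
    return stem + ("-lar" if last_vowel in BACK else "-ler")

print(english_plural("cat"), english_plural("dog"))    # cat-/s/ dog-/z/
print(turkish_plural("ev"), turkish_plural("çocuk"))   # ev-ler çocuk-lar
```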
A further difference concerns number agreement. Unlike in most sign languages, plurality can be realized more than once within a noun phrase in many spoken languages. The English example in (9a) illustrates that some determiners display at least number agreement with the head noun (but not with the adjective). The German example in (9b) illustrates that within the noun phrase, plurality is usually expressed on all elements on the left side of the head noun, i.e. the possessive and the adjective. Note that in both languages, the presence of a numeral does not block number agreement within the noun phrase.

(9) a. these (two) old cars                    [English]
    b. mein-e zwei alt-en Auto-s               [German]
       1sg.poss-pl two old-pl car-pl
       ‘my (two) old cars’
Other spoken languages pattern with sign languages. In Hungarian, for instance, the head noun can only be marked for plural if the noun phrase does not contain a numeral or quantifier, cf. (10) (Ortmann 2000, 251f). Hence, like in sign languages, plurality is only indicated once within the noun phrase in these languages: without numerals and quantifiers, only the head noun inflects for plural. Multiple realization of the plural feature within the noun phrase, as in examples (10b') and (10c'), leads to ungrammaticality (cf. Ortmann 2000, 2004).
(10) a. hajó               a'. hajó-k               [Hungarian]
        ship                   ship-pl
        ‘ship’                 ‘ships’
     b. öt/sok hajó        b'. *öt/sok hajó-k
        five/many ship         five/many ship-pl
        ‘five/many ships’      (‘five/many ships’)
     c. gyors hajó-k       c'. *gyors-ak hajó-k
        fast ship-pl           fast-pl ship-pl
        ‘fast ships’           (‘fast ships’)
Finally note that in some spoken languages, plural cannot be marked on the head noun but must be marked on other elements within the noun phrase. In Japanese, for instance, a noun does not morphologically inflect for the plural feature. Example (11a) illustrates that plurality is marked within the noun phrase by means of numerals or quantifiers, which are accompanied by numeral classifiers, cf. Kobuchi-Philip (2003). In Tagalog, plurality is also expressed within the noun phrase by means of a number word, i.e. mga, as illustrated in (11b), cf. Corbett (2000, 133f).
(11) a. [san-nin-no gakusei-ga] hon-o katta          [Japanese]
        3-cl-gen student-nom book-acc bought
        ‘Three students bought a book.’
     b. mga bahay          b'. mga tubig             [Tagalog]
        pl house               pl water
        ‘houses’               ‘cups/units of water’
Spoken languages like Japanese and Tagalog thus resemble IPSL, where nouns cannot be reduplicated and the plural feature must be expressed by a numeral or quantifier. However, unlike in Japanese and Tagalog, in most sign languages, nouns can be overtly inflected for plural and numerals and quantifiers only block overt plural marking on the head noun within the noun phrase.
5.2. Output forms

So far, we have discussed differences and similarities in the constraints on plural formation in spoken and sign languages. Now we turn to the output of plural formation. In plural formation, we do not only find examples of simple determination but also examples of under-, over-, and hyperdetermination of the plural feature. Let us first consider morphological underdetermination. Underdetermined plurals involve zero marking and are attested in both modalities. The second category, simple determination, is quite common in spoken languages since in affixation, stem internal change or reduplication, one morphological marker is usually used to express the plural feature overtly (i.e. an affix, a stem internal change, or a reduplicant respectively). By contrast, in sign languages, there is no case of simple determination of the plural feature. Reconsider midsagittal nouns, which typically allow simple reduplication. At first sight, the plural form of the noun book in Figure 6.3 above looks like a case of simple determination. The plural feature is only expressed once by means of reduplication. No additional morphological marker is used. However, as already mentioned above, in sign languages the base noun is not only repeated once but twice, i.e. it is triplicated. Actually, a single repetition of the base noun would be sufficient to express the plural feature. Therefore, triplication can be analyzed as an instance of the third category, i.e. overdetermination. In spoken languages, overdetermination usually involves double marking (i.e. stem change in combination with affixation) as illustrated in (8a'–c') above. Double marking clearly overdetermines the plural form since it would suffice to express the plural form by one marker only. The fourth category, hyperdetermination, is only attested in sign language pluralization. Recall that the plural form of lateral nouns such as child in Figure 6.4 above combines triplication with sideward movement (i.e., the reduplicant is not faithful to the base with respect to location features). This type of double overdetermination can be categorized as an instance of hyperdetermination. While overdetermination of morphosyntactic categories (e.g., number, agreement, or negation) is quite common in spoken languages, hyperdetermination is rare. The following table taken from Pfau and Steinbach (2006b, 176) summarizes the main similarities and differences in the strategies, quantities, and morphosyntax of plural marking in both modalities. Recall that affixation and stem change may not be completely absent in sign languages; nevertheless, both morphological operations are at best very rare.
Tab. 6.2: Plural marking in spoken and sign languages

 | spoken languages | sign languages
plural marking: strategy
  zero marking | √ | √
  affixation | √ | –
  reduplication | √ | –
plural marking: strategy (continued)
  stem change | √ | –
plural marking: quantity
  underdetermination | √ | √
  simple determination | √ | –
  overdetermination | √ | √
  hyperdetermination | ?? | √
expression of plural within the noun phrase
  use of numeral classifiers | √/– | –
  number agreement in the noun phrase | √/– | √/–
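For the quantity dimension of Tab. 6.2, one way to make the four categories concrete is to count how many times a form overtly marks the plural feature relative to what would minimally suffice. The toy mapping below is our own construal for illustration, not a formal proposal from the literature:

```python
# Toy construal of the quantity categories in Tab. 6.2, counting overt
# exponents of the plural feature in a form. This mapping is our own
# illustration, not a proposal from the literature.

def determination(n_exponents: int) -> str:
    categories = {0: "underdetermination",    # zero marking
                  1: "simple determination",  # e.g. a single affix
                  2: "overdetermination"}     # e.g. umlaut + affix, or
                                              # triplication (one repetition
                                              # would already suffice)
    return categories.get(n_exponents, "hyperdetermination")

print(determination(0))  # zero-marked DGS or German plurals
print(determination(2))  # German Häus-er; DGS simple triplication
print(determination(3))  # DGS lateral nouns: triplication + displacement
```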
5.3. The impact of modality

How can we account for the differences between spoken and sign languages discussed in the previous sections? The first obvious difference is that only spoken languages frequently use affixation in plural formation. We already mentioned that the lack of affixation in sign languages reflects a tendency of the visual-manual modality to avoid sequential affixation (cf. Aronoff/Meir/Sandler 2005). Moreover, the use of sign space in verb agreement and classifier verbs is also directly related to the unique property of the visual-manual modality to use a three-dimensional sign space in front of the signer to express grammatical or topographic relations. Another interesting difference is that the two basic plural marking strategies in sign languages involve either over- or hyperdetermination. Again, this difference seems to be due to specific properties of the visual-manual modality (cf. Pfau/Steinbach 2006b). Over- and hyperdetermination seem to increase the visual salience of signs in the sign space. Since much of the manual signing is perceived in peripheral vision, triplication as well as spatial displacement enhances phonological contrasts (cf. Siple 1978; Neville/Lawson 1987). In pluralization, nouns seem to exploit as many of these options as they can. This line of argumentation is supported by the claim that movements are functionally comparable to sonorous sounds in spoken language. Sign language syllables can be defined as consisting of one sequential movement. Triplication increases the phonological weight of the inflected sign (for syllables in sign language, see chapter 3 on phonology). Another determining factor might be that a fair number of signs already inherently involve lexical repetition. Hence, triplication distinguishes lexical repetition from morphosyntactic modification and is therefore a common feature in the morphosyntax of sign languages. Various
types of aspectual modification, for instance, also involve triplication (or even more repetitions, cf. chapter 9 on Tense, Aspect, and Modality). The clear tendency to avoid number agreement within noun phrases in sign languages can be related to modality-specific properties of the articulators. Sign language articulators are relatively massive and move in the transparent sign space (Meier 2002). This is true especially for the manual articulators involved in plural reduplication. Therefore, an economy constraint might block reduplication of the head noun in noun phrases whenever it is not necessary to express the plural feature (i.e. if the noun phrase contains a numeral or quantifier). Likewise, the strong influence of phonological features on plural formation can be explained by these specific properties of the articulators. In sign languages, many morphological operations such as verb agreement, classification, or reciprocity depend on phonological properties of the underlying stem and many morphemes consist of just one phonological feature (cf. Pfau/Steinbach (2005a) and chapter 3, Phonology; for similar effects on the interface between phonology and semantics, cf. Wilbur (2010)).
6. Conclusion

We have illustrated that sign languages use various plural marking strategies in the nominal and verbal domain. In the nominal domain, plurals are typically formed by simple or sideward reduplication of the noun or by zero marking. Strictly speaking, sign languages do not use reduplication but triplication, i.e. two repetitions of the base sign. Besides, some sign languages have specific strategies at their disposal such as an additional sweeping movement, movement alternation, or non-manual markers. In all sign languages investigated so far, the nominal strategies are basically constrained by phonological properties of the underlying nominal stem. Another typical property of many (but not all) sign languages is that plural reduplication of the head noun is blocked if the noun phrase contains a numeral or quantifier. Consequently, reduplication is only possible in bare noun phrases and therefore predicted to be infrequent. In the verbal domain, sign languages make use of the sign space to inflect agreement and classifier verbs for plural. The comparison of sign languages to spoken languages has revealed that there are some common strategies of pluralization in both modalities but also some modality-specific strategies and restrictions. Among the strategies both modalities choose to mark plurality on nouns are reduplication and zero marking. By contrast, affixation and stem internal changes are a frequent means of spoken language pluralization but not (or only rarely) found in sign language pluralization. Another similarity between both modalities is that the choice of strategy may depend on phonological properties of the underlying noun. Moreover, in both modalities, noun phrase internal number agreement may be blocked. However, while in sign languages number agreement within the noun phrase seems to be the exception, number agreement is quite common in many spoken languages. And finally, while under- and overdetermination can be found in both modalities, simple determination is attested only in spoken languages and hyperdetermination only in sign languages.
Of course, much more research on the typology of pluralization in sign languages is necessary in order to document and understand the extent of phonological, morphological, and syntactic variation across different sign languages and across spoken and sign languages.
7. Literature

Aronoff, Mark/Meir, Irit/Sandler, Wendy 2005 The Paradox of Sign Language Morphology. In: Language 81, 301–344.
Baker-Shenk, Charlotte/Cokely, Dennis 1980 American Sign Language: A Teacher’s Resource Text on Grammar and Culture. Silver Spring, MD: T.J. Publishers.
Benedicto, Elena/Brentari, Diane 2004 Where Did All the Arguments Go? Argument-changing Properties of Classifiers in ASL. In: Natural Language & Linguistic Theory 22, 743–810.
Blust, Robert 2001 Thao Triplication. In: Oceanic Linguistics 40, 324–335.
Corbett, Greville G. 2000 Number. Cambridge: Cambridge University Press.
Cormier, Kearsy 2007 Do All Pronouns Point? Indexicality of First Person Plural Pronouns in BSL and ASL. In: Perniss, Pamela/Pfau, Roland/Steinbach, Markus (eds.), Visible Variation: Comparative Studies on Sign Language Structure. Berlin: Mouton de Gruyter, 63–101.
Fernández-Viader, María del Pilar/Fuentes, Mariana 2008 The Systems of Numerals in Catalan Sign Language (LSC) and Spanish Sign Language (LSE): A Comparative Study. In: Quadros, Ronice M. de (ed.), Sign Languages: Spinning and Unraveling the Past, Present, and Future. Forty-five Papers and Three Posters from the 9th Theoretical Issues in Sign Language Research Conference, Florianopolis, Brazil, December 2006. Petrópolis: Editora Arara Azul. [Available at: www.editora-arara-azul.com.br/EstudosSurdos.php].
Fuentes, Mariana/Massone, María Ignacia/Fernández-Viader, María del Pilar/Makotrinsky, Alejandro/Pulgarín, Francisca 2010 Numeral-incorporating Roots in Numeral Systems: A Comparative Analysis of Two Sign Languages. In: Sign Language Studies 11, 55–75.
Harder, Rita 2003 Meervoud in de NGT. Manuscript, Nederlands Gebarencentrum.
Hayes, Bruce/Abad, May 1989 Reduplication and Syllabification in Ilokano. In: Lingua 77, 331–374.
Heyerick, Isabelle/Braeckevelt, Mieke van 2008 Rapport Onderzoeksmethodologie Meervoudsvorming in Vlaamse Gebarentaal. Vlaamse GebaarentaalCentrum (vgtC), Antwerpen.
Heyerick, Isabelle/Braeckevelt, Mieke van/Weerdt, Danny de/Van der Herreweghe, Mieke/Vermeerbergen, Myriam 2009 Plural Formation in Flemish Sign Language. Paper Presented at Current Research in Sign Linguistics (CILS), Namur.
Iversen, Wiebke 2008 Keine Zahl ohne Zeichen. Der Einfluss der medialen Eigenschaften der DGS-Zahlzeichen auf deren mentale Verarbeitung. PhD Dissertation, University of Aachen.
Iversen, Wiebke/Nuerk, Hans-Christoph/Jäger, Ludwig/Willmes, Klaus 2006 The Influence of an External Symbol System on Number Parity Representation, or What’s Odd About 6? In: Psychonomic Bulletin & Review 13, 730–736.
Iversen, Wiebke/Nuerk, Hans-Christoph/Willmes, Klaus 2004 Do Signers Think Differently? The Processing of Number Parity in Deaf Participants. In: Cortex 40, 176–178.
Jones, M./Mohr, K. 1975 A Working Paper on Plurals in ASL. Manuscript, University of California, Berkeley.
Kobuchi-Philip, Mana 2003 Syntax and Semantics of the Japanese Floating Numeral Quantifier. Paper Presented at Incontro di Grammatica Generativa XXIX, Urbino.
Köpcke, Klaus-Michael 1993 Schemata bei der Pluralbildung im Deutschen: Versuch einer kognitiven Morphologie. Tübingen: Narr.
Kubuş, Okan 2008 An Analysis of Turkish Sign Language (TİD) Phonology and Morphology. MA Thesis, The Middle East Technical University, Ankara.
Leybaert, Jacqueline/Cutsem, Marie-Noelle van 2002 Counting in Sign Language. In: Journal of Experimental Child Psychology 81, 482–501.
McBurney, Susan Lloyd 2002 Pronominal Reference in Signed and Spoken Language: Are Grammatical Categories Modality-dependent? In: Meier, Richard/Cormier, Kearsy/Quinto-Pozos, David (eds.), Modality and Structure in Signed and Spoken Languages. Cambridge: Cambridge University Press, 329–369.
McKee, David/McKee, Rachel/Major, George 2008 Sociolinguistic Variation in NZSL Numerals. In: Quadros, Ronice M. de (ed.), Sign Languages: Spinning and Unraveling the Past, Present, and Future. Forty-five Papers and Three Posters from the 9th Theoretical Issues in Sign Language Research Conference, Florianopolis, Brazil, December 2006. Petrópolis: Editora Arara Azul. [Available at: www.editora-arara-azul.com.br/EstudosSurdos.php].
Mathur, Gaurav/Rathmann, Christian 2001 Why not ‘give-us’: An Articulatory Constraint in Sign Languages. In: Dively, Valerie/Metzger, Melanie/Taub, Sarah/Baer, Anne-Marie (eds.), Signed Languages: Discoveries from International Research. Washington, DC: Gallaudet University Press, 1–26.
Mathur, Gaurav/Rathmann, Christian 2010 Verb Agreement in Sign Language Morphology. In: Brentari, Diane (ed.), Sign Languages (Cambridge Language Surveys). Cambridge: Cambridge University Press, 173–196.
Mathur, Gaurav/Rathmann, Christian 2011 Two Types of Nonconcatenative Morphology in Signed Language. In: Mathur, Gaurav/Napoli, Donna Jo (eds.), Deaf Around the World. Oxford: Oxford University Press, 54–82.
Meier, Richard 2002 Why Different, Why the Same? Explaining Effects and Non-effects of Modality Upon Linguistic Structure in Sign and Speech. In: Meier, Richard/Cormier, Kearsy/Quinto-Pozos, David (eds.), Modality and Structure in Signed and Spoken Languages. Cambridge: Cambridge University Press, 1–25.
Neef, Martin 1998 The Reduced Syllable Plural in German. In: Fabri, Ray/Ortmann, Albert/Parodi, Teresa (eds.), Models of Inflection. Tübingen: Niemeyer, 244–265.
Neef, Martin 2000 Morphologische und syntaktische Konditionierung. In: Booij, Geert et al. (ed.), Morphologie: Ein internationales Handbuch zur Flexion und Wortbildung. Berlin: de Gruyter, 473–484.
Neville, Helen J./Lawson, Donald S. 1987 Attention to Central and Peripheral Visual Space in a Movement Detection Task: An Event-related Potential and Behavioral Study (Parts I, II, III). In: Brain Research 405, 253–294.
Nijhof, Sibylla/Zwitserlood, Inge 1999 Pluralization in Sign Language of the Netherlands (NGT). In: Don, Jan/Sanders, Ted (eds.), OTS Yearbook 1998–1999. Utrecht: Utrechts Instituut voor Linguistiek OTS, 58–78.
Ortmann, Albert 2000 Where Plural Refuses to Agree: Feature Unification and Morphological Economy. In: Acta Linguistica Hungarica 47, 249–288.
Ortmann, Albert 2004 A Factorial Typology of Number Marking in Noun Phrases: The Tension of Economy and Faithfulness. In: Müller, Gereon/Gunkel, Lutz/Zifonun, Gisela (eds.), Explorations in Nominal Inflection. Berlin: Mouton de Gruyter, 229–267.
Perniss, Pamela 2001 Numerus und Quantifikation in der Deutschen Gebärdensprache. MA Thesis, University of Cologne.
Perry, Deborah 2004 The Use of Reduplication in ASL Plurals. MA Thesis, Boston University.
Pfau, Roland/Steinbach, Markus 2003 Optimal Reciprocals in German Sign Language. In: Sign Language & Linguistics 6, 3–42.
Pfau, Roland/Steinbach, Markus 2005a Backward and Sideward Reduplication in German Sign Language. In: Hurch, Bernhard (ed.), Studies on Reduplication. Berlin: Mouton de Gruyter, 569–594.
Pfau, Roland/Steinbach, Markus 2005b Plural Formation in German Sign Language: Constraints and Strategies. In: Leuninger, Helen/Happ, Daniela (eds.), Gebärdensprache. Struktur, Erwerb, Verwendung (Linguistische Berichte Special Issue 13). Opladen: Westdeutscher Verlag, 111–144.
Pfau, Roland/Steinbach, Markus 2006a Modality-independent and Modality-specific Aspects of Grammaticalization in Sign Languages. In: Linguistics in Potsdam 24, 5–94.
Pfau, Roland/Steinbach, Markus 2006b Pluralization in Sign and in Speech: A Cross-modal Typological Study. In: Linguistic Typology 10, 135–182.
Pfau, Roland/Steinbach, Markus 2011 Grammaticalization in Sign Languages. In: Heine, Bernd/Narrog, Heiko (eds.), Handbook of Grammaticalization. Oxford: Oxford University Press, 681–693.
Pizzuto, Elena/Corazza, Serena 1996 Noun Morphology in Italian Sign Language. In: Lingua 98, 169–196.
Sandler, Wendy 1999 The Medium and the Message: Prosodic Interpretation of Linguistic Content in Israeli Sign Language. In: Sign Language & Linguistics 2(2), 187–215.
Schmaling, Constanze 2000 Maganar Hannu, Language of the Hands: A Descriptive Analysis of Hausa Sign Language. Hamburg: Signum.
Siple, Patricia 1978 Visual Constraints for Sign Language Communication. In: Sign Language Studies 19, 97–112.
Skant, Andrea/Dotter, Franz/Bergmeister, Elisabeth/Hilzensauer, Marlene/Hobel, Manuela/Krammer, Klaudia/Okorn, Ingeborg/Orasche, Christian/Orter, Reinhold/Unterberger, Natalie 2002 Grammatik der Österreichischen Gebärdensprache. Klagenfurt: Forschungszentrum für Gebärdensprache und Hörgeschädigtenkommunikation.
Stavans, Anat 1996 One, Two, or More: The Expression of Number in Israeli Sign Language. In: International Review of Sign Linguistics 1, 95–114.
Sutton-Spence, Rachel/Woll, Bencie 1999 The Linguistics of British Sign Language: An Introduction. Cambridge: Cambridge University Press.
Valli, Clayton/Lucas, Ceil 1992 Linguistics of American Sign Language: An Introduction. Washington, DC: Gallaudet University Press.
Wilbur, Ronnie 1987 American Sign Language: Linguistic and Applied Dimensions. Boston: Little, Brown & Co.
Wilbur, Ronnie 2010 The Semantics-Phonology Interface. In: Brentari, Diane (ed.), Sign Languages (Cambridge Language Surveys). Cambridge: Cambridge University Press, 357–382.
Zeshan, Ulrike 2000 Sign Language in Indo-Pakistan: A Description of a Signed Language. Amsterdam: Benjamins.
Zwitserlood, Inge 2003 Classifying Hand Configurations in Nederlandse Gebarentaal. Utrecht: LOT.
Zwitserlood, Inge/Perniss, Pamela/Özyürek, Aslı 2011 An Empirical Investigation of Plural Expression in Turkish Sign Language (TİD): Are There Modality Effects? Manuscript, Radboud University Nijmegen and Max Planck Institute for Psycholinguistics, Nijmegen.
Markus Steinbach, Göttingen (Germany)
7. Verb agreement

1. Introduction
2. Background on agreement
3. Realization of agreement
4. Candidacy for agreement
5. Conclusion: agreement in sign and spoken languages
6. Literature
Abstract

This chapter compares several theoretical approaches to the phenomenon often labeled ‘verb agreement’ in sign languages. The overall picture that emerges is that cross-modally, there are both similarities and differences with respect to agreement. Sign languages seem to be similar to spoken languages in that they realize the person and number features of the arguments of the verbs through agreement, suggesting an agreement process that is
available to both modalities. However, there are two important cross-modal differences. First, the agreement process in sign languages is restricted to a smaller set of verbs than seen in many spoken languages. This difference may be resolved if this restriction is taken to be parallel to other restrictions that have been noted in many spoken languages. Second, the properties of agreement are more uniform across many sign languages than across spoken languages. This peculiarity can be derived from yet another cross-modal difference: certain agreement forms in sign languages require interaction with gestural space. Thus, while the cross-modal differences are rooted in the visual-manual modality of sign languages, sign and spoken languages are ultimately similar in that they both draw on the agreement process.
1. Introduction

Fig. 7.1: Forms of ask in ASL. The form on the left corresponds to ‘I ask you’ while the form on the right corresponds to ‘you ask me’.

Figure 7.1 shows two forms of the verb ask in American Sign Language (ASL). The form on the left means ‘I ask you’ while the form on the right means ‘you ask me’. Both forms have similar handshape (crooking index finger) and similar shape of the path of movement (straight), which constitutes the basic, lexical form for ask. The only difference between these two forms lies in the orientation of the hand and the direction of movement: the form on the left is oriented and moves towards an area to the signer’s left, while the form on the right is oriented and moves towards the signer’s chest. The phenomenon illustrated in Figure 7.1 is well documented in many sign languages, including, but not limited to, ASL (Padden 1983), Argentine Sign Language (Massone/Curiel 2004), Australian Sign Language (Johnston/Schembri 2007), Brazilian Sign Language (Quadros 1999), British Sign Language (Sutton-Spence/Woll 1999), Catalan Sign Language (Quer/Frigola 2006), German Sign Language (Rathmann 2000), Greek Sign Language (Sapountzaki 2005), Indo-Pakistani Sign Language (Zeshan 2000), Israeli Sign Language (Meir 1998), Japanese Sign Language (Fischer 1996), Korean Sign Language (Hong 2008), Sign Language of the Netherlands (Bos 1994; Zwitserlood/Van Gijn 2006), and Taiwanese Sign Language (Smith 1990). Some researchers have considered the change in orientation and direction of movement to mark verb agreement, since the difference between the two forms corresponds
to a difference in meaning that is often marked in spoken languages by person agreement with the subject and object. However, such an analysis remains controversial and has occupied a significant portion of the field of sign linguistics. Labeling this phenomenon as ‘verb agreement’ comes with many theoretical assumptions. In exploring these theoretical assumptions, this chapter addresses the core issue of whether this phenomenon can indeed qualify as ‘verb agreement’ by taking a particular angle: whether the phenomenon can be explained as the morphological realization of verb agreement in sign languages. For the purpose of this chapter, the following working definition of agreement is adopted from Steele (1978): “The term agreement commonly refers to some systematic covariance between a semantic or formal property of one element and a formal property of another.” Corbett (2006) expands on this definition by specifying that there are four main components to the systematic covariance, as listed in (1).
(1) Main components of agreement (Corbett 2006, 1)
    (i)   controller (an element which determines agreement)
    (ii)  target (an element whose form is determined by agreement)
    (iii) domain (the syntactic environment within which agreement occurs)
    (iv)  features (the properties in which the controller agrees with the target)
To examine whether the phenomenon in sign languages can be analyzed as verb agreement, the chapter first provides a brief background on the phenomenon depicted in Figure 7.1. Then, the following section discusses whether this phenomenon can be analyzed as the morphological realization of person and number features and compares several theoretical approaches to this issue. Next, on the assumption that the phenomenon is indeed the realization of person and number features, the chapter considers cases when the features are not completely realized and focuses on the issue of determining which verbs realize these features. Again, this section takes into account the latest theoretical analyses of this issue. The phenomenon is ultimately used as a case study to identify linguistic properties that are common to both spoken and sign languages and to understand the effects of language modality on these properties.
2. Background on agreement

This section provides a brief background on verb agreement in sign languages for those unfamiliar with the phenomenon. There are many detailed descriptions of the phenomenon available (see, for example, Lillo-Martin/Meier (2011) and Mathur/Rathmann (2010) for a comprehensive description). Due to space, the description is necessarily condensed here. First, not all verbs undergo a change in orientation and/or direction of movement to show a corresponding change in meaning. As Padden (1983) observes for ASL, there are three classes of verbs which she labels ‘plain verbs’, ‘agreeing verbs’, and ‘spatial verbs’, respectively. The above example of ask falls into the class of agreeing verbs, which undergo the above-described phonological changes to reflect a change in meaning (specifically, who is doing the action to whom). Spatial verbs, like, for example,
move, put, and drive, change the path of movement to show the endpoints of the motion (e.g. I moved the piece of paper from here to there). Plain verbs may be inflected for aspect, but otherwise cannot be changed in the same way as agreeing and spatial verbs. Two ASL examples are cook and buy. The same tri-partite classification of verbs has been confirmed in many other documented sign languages. Within the class of ‘agreeing verbs’, verbs manifest the phenomenon shown in Figure 7.1 in different ways depending on their phonological shape. Some verbs like tell mark only the indirect/direct object (called ‘single agreement’), while others like give mark both the subject and indirect/direct object (called ‘double agreement’) (Meier 1982). Some verbs mark the subject and indirect/direct object by changing the orientation of the hands only (e.g. pity in ASL), while others show the change in meaning by changing only the direction of movement (e.g. help in ASL), and yet others show the change through both orientation and direction of movement (e.g. ask shown in Figure 7.1) (Mathur 2000; Mathur/Rathmann 2006). The various ways of manifesting the phenomenon in Figure 7.1 have sometimes been subsumed under the term ‘directionality’. In addition to marking the changes in meaning through a change in the orientation and/or direction of movement (i.e. through manual changes), other researchers have claimed that it can also be marked non-manually through a change in eye gaze and head tilt co-occurring with the verb phrase (Aarons et al. 1992; Bahan 1996; Neidle et al. 2000). They claim in particular that eye gaze and head tilt mark object and subject agreement respectively, while noting that these non-manual forms of agreement are optional. Thompson, Emmorey, and Kluender (2006) sought to evaluate the claims made by Neidle et al. (2000) by conducting an eye-tracking study. They found that eye gaze was directed toward the area associated with the object referent for 74% of agreeing verbs and for 11% of plain verbs. Since eye gaze did not consistently co-occur with plain verbs as predicted by Neidle et al. (2000), Thompson et al. were led to conclude that eye gaze does not obligatorily mark object agreement.
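The cross-cutting distinctions in this section (verb class, single vs. double agreement, and whether agreement is expressed by orientation, direction of movement, or both) can be kept apart in a small feature inventory. The record type and field names below are invented; the entries echo the ASL examples cited above, and values not explicitly stated in the text are flagged as assumptions in the comments:

```python
# Invented feature inventory for the verb-class distinctions described
# above. Field names are ad hoc; entries echo the ASL examples cited.

from dataclasses import dataclass

@dataclass(frozen=True)
class VerbEntry:
    gloss: str
    verb_class: str       # 'plain' | 'agreeing' | 'spatial'
    agreement: str        # 'none' | 'single' | 'double'
    exponence: frozenset  # subset of {'orientation', 'direction'}

LEXICON = [
    VerbEntry("cook", "plain",    "none",   frozenset()),
    VerbEntry("tell", "agreeing", "single", frozenset({"direction"})),   # exponence assumed
    VerbEntry("give", "agreeing", "double", frozenset({"direction"})),   # exponence assumed
    VerbEntry("help", "agreeing", "double", frozenset({"direction"})),   # single/double assumed
    VerbEntry("pity", "agreeing", "double", frozenset({"orientation"})), # single/double assumed
    VerbEntry("ask",  "agreeing", "double", frozenset({"orientation", "direction"})),
]

for v in LEXICON:
    print(f"{v.gloss}: {v.verb_class}, {v.agreement}, {sorted(v.exponence)}")
```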
3. Realization of agreement

One foundational issue concerning the phenomenon illustrated in Figure 7.1 is whether it can be understood as the realization of verb agreement, and, if so, what the relevant features in this realization are. There have been three general approaches to this issue: the R-locus analysis (as articulated by Lillo-Martin/Klima 1990), the indicating analysis (Liddell 2003), and the featural analysis (Padden 1983; Rathmann/Mathur 2008). For each approach, the section considers how the approach understands the mechanics behind the realization of agreement (e.g. if it is considered ‘agreement’, which elements agree with which elements in what features). The issue of how the phenomenon interacts with signing space is also discussed, as well as the implications of this interaction for cross-linguistic uniformity.
3.1. R-locus analysis

The R-locus analysis was originally inspired by Lacy (1974). It was further articulated by Lillo-Martin and Klima (1990) in the case of pronouns, applied by Meir (1998, 2002)
and Aronoff, Meir, and Sandler (2005) to the phenomenon under discussion, and further elaborated on by Lillo-Martin and Meier (2011). In this analysis, each noun phrase is associated with an abstract referential index. The index is a variable in the linguistic system which receives its value from discourse and functions to keep the referent of the noun phrase distinct from the referents of other noun phrases. The index is realized in the form of a locus, a point in signing space that is associated with the referent of the noun phrase. This locus is referred to as a ‘referential locus’, or R-locus for short. There are theoretically an infinite number of R-loci in signing space. By separating the referential index, an abstract variable, from the R-locus, the analysis avoids the listability issue, that is, it avoids the issue of listing each R-locus as a potential form in the lexicon. For further discussion of the distinction between the referential index and the R-locus, see Sandler and Lillo-Martin (2006) and Lillo-Martin and Meier (2011). Following Meir (1998, 2002), Aronoff, Meir, and Sandler (2005) have extended the R-locus analysis to the phenomenon in Israeli Sign Language (Israeli SL) and ASL and compared it to literal alliterative agreement in spoken languages like Bainouk, a Niger-Congo language, and Arapesh, a language spoken in Papua New Guinea. The mechanics of alliterative agreement is that of a copying mechanism: for example, an initial consonant-vowel syllable of the noun phrase is copied onto an adjective or a verb as an expression of agreement. Similarly, in Israeli SL and ASL, the R-loci of the noun phrases are copied onto the verb as an expression of agreement. The ASL example constructed below illustrates how the copying mechanism works.
(2) [s-u-e ixa] [b-o-b ixb] aaskb [ASL]
    (ixa = R-locus for Sue; ixb = R-locus for Bob)
    ‘Sue asked Bob a question.’
Under this analysis, the phenomenon is understood as ‘agreement’ between a noun phrase and a verb in the sense that they share a referential index, which is realized overtly as an R-locus. At the same time, Aronoff, Meir, and Sandler (2005, 321) concede one difference from literal alliterative agreement in spoken languages: the R-loci that (the referents of) nouns are associated with “are not part of their phonological representations and are not lexical properties of the nouns in any way. Rather, they are assigned to nouns anew in every discourse.” While Aronoff et al. (2005) do not explicitly relate the mechanism to the realization of person and number features, the analysis is compatible with the use of such features if the features are assumed to be encoded as part of the referential index. One question is how the analysis would handle the realization of the number feature if it has a plural value (for plural see also chapter 6). The plural feature can be realized in many sign languages in two ways: (i) reduplication along an arc (called the ‘exhaustive’ form by Klima/Bellugi (1979)), which results in a distributive meaning, and (ii) movement along an arc without reduplication (labeled as the ‘multiple’ form by Klima/Bellugi), which results in a collective meaning. For the first type of realization, the analysis would need to posit separate R-loci for each of the referents associated with the plural noun phrase, while for the second type of realization, the entities referred to by the noun phrase would be associated as a group with a single R-locus.
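Purely as an illustration of the mechanics just described (and not as part of any of the cited analyses), the separation of referential index and R-locus can be sketched in a few lines of Python. All names in the sketch are hypothetical, and loci are crudely simplified to angles in the horizontal signing plane; the point is only that the lexicon never lists loci, since they are assigned to indices in discourse and then copied onto the verb.

    # A minimal sketch of the R-locus analysis; all names are hypothetical.
    class ReferentialIndex:
        """An abstract variable; it carries no phonological content itself."""
        def __init__(self, referent):
            self.referent = referent   # e.g. 'Sue'
            self.locus = None          # assigned only in discourse

    class Discourse:
        def __init__(self):
            self._next_angle = 30      # degrees; an arbitrary convention

        def assign_locus(self, index):
            # Loci come from a continuous space and so cannot be pre-listed
            # in the lexicon -- this is the listability issue.
            index.locus = self._next_angle
            self._next_angle += 60
            return index

    def agree(verb, subj_index, obj_index):
        # 'Agreement' as copying: the verb's form is spelled out from the
        # subject's locus to the object's locus (regular verbs only).
        return f"{verb}: path {subj_index.locus}deg -> {obj_index.locus}deg"

    d = Discourse()
    sue = d.assign_locus(ReferentialIndex('Sue'))
    bob = d.assign_locus(ReferentialIndex('Bob'))
    print(agree('ASK', sue, bob))      # ASK: path 30deg -> 90deg

Under this toy model, the ‘exhaustive’ plural would require one locus per referent, while the ‘multiple’ plural would associate the whole group with a single locus, exactly the two options mentioned above.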
Cormier, Wechsler, and Meier (1998) use the theoretical framework of Head-driven Phrase Structure Grammar (HPSG, Pollard/Sag 1994), a lexicon-based approach, to provide an explicit analysis of agreement as index-sharing. In this framework, the noun phrase (NP) has a lexical entry which specifies the value of its index. The index is defined with respect to the locus (a location in signing space), and the locus can be one of three: the location directly in front of the signer’s chest (S), the location associated with the addressee (A), or ‘other’. This last category is further divided into distinct locations in neutral space that are labeled as i, j, k, and so forth. Thus, they view the locus as a phi-feature in ASL, which is a value of the index. The listability issue is resolved if it is assumed that the index allows an infinite number of values. The possible values for the index are summarized in (3).
(3) Index values in sign languages in HPSG framework (Cormier et al. 1998)
    index: [LOCUS locus]
    Partition of locus: S, A, other
    Partition of other: i, j, k, ...
According to Cormier, Wechsler, and Meier (1998), a verb has a lexical entry that is sorted according to single or double agreement and that includes specifications for phonology (PHON) and syntax and semantics (SYNSEM). The SYNSEM component contains the verb’s argument structure (ARG-ST) and co-indexes the noun phrases with their respective semantic roles in CONTENT. For example, the verb see has an argument structure of ⟨NP1, NP2⟩ and the content of [SEER1 and SEEN2]. NP1 is co-indexed with SEER, and NP2 with SEEN, by virtue of the underlined indexes. This lexical entry is then expanded by a declaration specific to the verb’s sort (single- or double-agreement), which specifies the phonological form according to the values of the loci associated with the noun phrases in the argument structure (see Hahm (2006) for a more recent discussion of person and number features within the HPSG framework and Steinbach (2011) for a recent HPSG analysis of sign language agreement). Neidle et al. (2000) have similarly suggested that phi-features are the relevant features for agreement, and that phi-features are realized by such loci. They envision agreement as a feature-checking process as opposed to an index-copying or -sharing process in the sense of Aronoff, Meir, and Sandler (2005) or Cormier, Wechsler, and Meier (1998). McBurney (2002) describes the phenomenon for pronouns in a similar way, although she reaches a different conclusion regarding the status of the phenomenon (see chapter 11 for discussion of pronouns). A more recent perspective on the R-locus analysis comes from Lillo-Martin and Meier (2011, 122), who argue “that directionality is a grammatical phenomenon for person marking” and refer to “index-sharing analyses of it. The index which is shared by the verb and its argument is realized through a kind of pointing to locations which are determined on the surface by connection to para-linguistic gesture.”
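The index-sharing mechanics of Cormier, Wechsler, and Meier (1998) can likewise be rendered schematically. The following sketch is our own simplification and not the authors’ notation: the class and attribute names are hypothetical, only the roles SEER/SEEN for see and the locus partition in (3) come from the description above, and the PHON value is reduced to a string built from the loci in the argument structure.

    from dataclasses import dataclass

    # Partition of locus values after (3): 'S' (signer), 'A' (addressee),
    # and an open-ended set 'other_i', 'other_j', ...
    @dataclass(frozen=True)
    class Index:
        locus: str

    @dataclass
    class NP:
        phon: str
        index: Index

    def double_agreement_verb(stem):
        # The sort-specific declaration: PHON is computed from the loci of
        # ARG-ST, and CONTENT co-indexes the NPs with their semantic roles.
        def entry(np1, np2):
            return {
                'PHON': f"{np1.index.locus}-{stem}-{np2.index.locus}",
                'ARG-ST': [np1, np2],
                'CONTENT': {'SEER': np1.index, 'SEEN': np2.index},
            }
        return entry

    see = double_agreement_verb('SEE')
    sue = NP('S-U-E', Index('other_i'))
    bob = NP('B-O-B', Index('other_j'))
    print(see(sue, bob)['PHON'])       # other_i-SEE-other_j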
3.2. Indicating analysis

Liddell (1990, 1995, 2000, 2003) challenges the R-locus analysis, arguing that verbs which display the phenomenon illustrated in Figure 7.1 are best understood as being
Fig. 7.2: Liddell and Metzger’s (1998, 669) illustration of the mappings between three mental spaces (cartoon space, Real space, and grounded blend). Copyright © 1998 by Elsevier. Reprinted with permission.
directed to entities in mental spaces. Since these entities do not belong to the linguistic system proper, Liddell does not consider the phenomenon to be an instance of verb agreement. Rather, he calls such verbs ‘indicating verbs’, because the verbs ‘indicate’ or point to referents just as one might gesture toward an item when saying “I would like to buy this”. Other sign language researchers such as Johnston and Schembri (2007) have adopted Liddell’s analysis in their treatment of similar phenomena in Australian Sign Language (Auslan). Two key points have inspired Liddell to develop the ‘indicating analysis’. First, it is not possible to list an infinite number of loci as agreement morphemes in the lexicon. Second, an ASL sign like ‘give-to-tall person’ is directed higher in the signing space, while ‘give-to-child’ is directed lower, as first noted by Fischer and Gough (1978). The indicating analysis draws on mental space theory (Fauconnier 1985, 1997) to generate connections between linguistic elements and mental entities. To illustrate the mechanics of the indicating analysis, an example provided by Liddell and Metzger (1998, 669) is given in Figure 7.2 and is reviewed here. Three mental spaces are required to account for one instance of look-at in ASL: a ‘cartoon space’ where the interaction between the seated cat Garfield and his owner takes place; a Real space containing mental representations of oneself and other entities in the immediate physical environment; and a grounded blend, which blends elements of the two spaces. In this blended space, the ‘owner’ and ‘Garfield’ are mapped respectively from the ‘owner’ and ‘Garfield’ in the cartoon space. From Real space, the ‘signer’ is mapped onto ‘Garfield’ in the blended space. Liddell (2003) assumes that verbs are lexically marked for whether they indicate a single entity corresponding to the object (notated as VERB/y) or two entities corresponding to the subject and the object, respectively (notated as VERBx/y). He proposes
a similar notation for other forms involving plurality, as well as for spatial verbs (VERB/L, where L stands for location). Similarly, constraints on the process of agreement, such as the restriction of the multiple form to the object, would have to be encoded in the lexicon. The indicating analysis could account for the uniformity of the properties surrounding the phenomenon across various sign languages by tying the phenomenon to the act of gesturing toward entities, which is universally available to every signer. The indicating analysis does not assume a morphemic analysis of the phenomenon in Figure 7.1 in terms of person and number features, yet lexicalizes them on some verb entries, e.g. those involving plurality. If a large number of verbs display such forms, the indicating analysis would need to explain why it is necessary to lexicalize the forms rather than treating the realization of the plural feature as a morphological process.
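The lexical marking that the indicating analysis requires can also be made concrete. In the toy lexicon below (our own illustration; the verb entries and the blend are hypothetical), each verb records which mental-space entities it must be directed to, mirroring Liddell’s diacritics VERB/y, VERBx/y, and VERB/L.

    # Hypothetical encoding of Liddell's lexical diacritics.
    LEXICON = {
        'TELL': {'indicates': ('y',)},       # VERB/y: object entity only
        'GIVE': {'indicates': ('x', 'y')},   # VERBx/y: subject and object
        'MOVE': {'indicates': ('L',)},       # VERB/L: a location
    }

    def direct(verb, blend):
        # The verb points at whatever entities its diacritics name; the
        # entities live in a grounded blend, not in the grammar itself.
        slots = LEXICON[verb]['indicates']
        return {slot: blend[slot] for slot in slots}

    blend = {'x': 'signer', 'y': 'Garfield', 'L': 'table-top'}
    print(direct('GIVE', blend))   # {'x': 'signer', 'y': 'Garfield'}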
3.3. Featural analysis

Rathmann and Mathur (2002, 2008) provide another kind of analysis that is something of a hybrid of the R-locus and indicating analyses. In a sense, the featural analysis harks back to the original analysis of Padden (1983) and suggests that verbs agree with the subject and the object in the morphosyntactic features of person and number (cf. Neidle et al. (2000) for a similar view). Rathmann and Mathur (2008) propose that the features are realized as follows.
(4) Morphosyntactic features (Rathmann/Mathur 2008)
    a. Person
       First: [+1] → on/near chest (marked)
       Non-first: [-1] → Ø
    b. Number
       i. Features
          Plural (collective): [+pl] → horizontal arc (marked)
          Singular: [-pl] → Ø
       ii. Reduplication: exhaustive (distributive), dual
The features for the category of person follow Meier (1990). First person is realized as a location on or near the chest, while non-first person is realized as a zero form. Following Rathmann and Mathur (2002), the zero morpheme for non-first person may be matched with a deictic gesture within an interface between spatio-temporal conceptual structure and the articulatory-phonetic system in the architecture of grammar as articulated by Jackendoff (2002). This interface is manifested through signing space or gestural space (as it is called by Rathmann and Mathur). The realization of person features takes place through a process called ‘alignment’ (Mathur 2000), which is an instance of a readjustment process (Rathmann/Mathur 2002). With respect to the category of number, two features are assumed. The plural feature, which is marked and encodes the collective reading, is realized as the multiple form. The possibility that the other plural forms are reduced to reduplication of the singular form is left for further investigation. The singular feature is unmarked and
realized as a zero form. We suggest that the realization of the multiple form occurs through affixal insertion, as evidenced by the fact that the morphological realization of number features is necessarily ordered after the realization of person features (Mathur 2002). See chapter 6, Plurality, for further discussion of plurality as it is marked on verbs.
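The realization rules in (4), together with the ordering claim just made, can be rendered in the following schematic sketch (again only an illustration under our own simplifying assumptions; the function names and the string representations of the forms are hypothetical). Person is realized first, with non-first person as a zero morpheme whose target is supplied by a deictic gesture; the arc realizing the plural is inserted afterwards.

    def realize_person(feature, gesture_target=None):
        # [+1] -> location on/near the chest; [-1] -> zero morpheme, which
        # may be matched with a deictic gesture in gestural space.
        return 'chest' if feature == '+1' else gesture_target

    def realize_number(feature):
        # [+pl] (collective) -> horizontal arc; [-pl] -> zero form.
        return 'horizontal-arc' if feature == '+pl' else None

    def inflect(stem, person, number, gesture_target=None):
        # Person is realized before number: the arc is affixed only after
        # the person target has been fixed.
        form = [stem, 'toward:' + str(realize_person(person, gesture_target))]
        arc = realize_number(number)
        if arc is not None:
            form.append(arc)
        return ' '.join(form)

    print(inflect('GIVE', '-1', '+pl', gesture_target='locus_a'))
    # GIVE toward:locus_a horizontal-arc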
3.4. Interface with gesture

The three approaches are now compared on the basis of how they account for the interaction of verb agreement with gestural space. As mentioned earlier, the linguistic system cannot directly refer to areas within gestural space (Lillo-Martin/Klima 1990; Liddell 1995). Otherwise, one runs into the trouble of listing an infinite number of areas in gestural space in the lexicon, an issue which Liddell (2000) raises and which Rathmann and Mathur (2002) describe in greater detail and call the listability issue. For example, the claim that certain verbs ‘agree’ with areas in gestural space is problematic, because that would require the impossible task of listing each area in gestural space as a possible agreement morpheme in the lexicon (Liddell 2000). The above analyses have approached the issue of listability in different ways. The R-locus analysis avoids the listability issue by separating the R-locus from the R-index (Lillo-Martin/Klima 1990; Meir 1998, 2002). The linguistic system refers to the R-index and not to the R-locus. The connection between the R-index and the R-locus is mediated by discourse: the R-index receives its value from discourse and links to a referent, which is in turn associated with an R-locus. While the R-locus approach is clear about how non-first person is realized, the analysis leaves open the point at which the phonological content of the R-locus enters the linguistic system. On this question, Lillo-Martin and Meier (2011, 122) clarify that phonological specification of the R-index is not necessary; the specification is “determined on the surface by connection to para-linguistic gesture”.
Whereas they assume many person distinctions under the value of non-first person, the featural analysis assumes only one, namely a zero morpheme. The use of a zero morpheme is the featural analysis’s solution to the listability issue. The different approaches are compatible in several ways. First, while the R-locus analysis emphasizes the referential index and the featural analysis emphasizes features, they can be made compatible by connecting the index directly to features as has been done in spoken languages (cf. Cormier, Wechsler, and Meier 1998). Then the process of agreement can refer to these indices and features in syntax and morphology. The indicating analysis, on the other hand, rejects any process of agreement and places any person and number distinctions in the lexicon. The lexicon is one place where the indicating analysis and the featural analysis could be compatible: in the featural analysis, features are realized as morphemes which are stored in a ‘vocabulary list’ which is similar to the lexicon; if one assumes that verbs are combined with inflectional morphemes in the lexicon before syntax (and before they are combined with a gesture), the featural analysis and the indicating analysis would converge. However, the featural analysis as it stands does not assume that the lexicon generates fully inflected verbs; rather, verbs are inflected as part of syntax and spelled out through a post-lexical morphological component. Otherwise, all approaches agree that linguistic elements must be allowed to interface with gestural elements. Whereas the R-locus analysis sees the interface as occurring in discourse (the R-index is linked to a discourse referent which is associated with an R-locus), and whereas the indicating analysis sees the interface as a blending of mental space entities with linguistic elements, the featural analysis sees the interface as linking spatio-temporal conceptual structure and articulatory-phonetic systems through gestural space. There are then different ways to understand how the process of verb agreement interacts with gestural space. By investigating the different contexts in which verb agreement interfaces with gestural space, and by identifying constraints on this interface, we can begin to distinguish among predictions made by the various approaches to the issue of listability.
3.5. Cross-linguistic uniformity and variation

As mentioned in section 2, there is a tri-partite classification of verbs depending on whether they show agreement. Moreover, verbs that show agreement vary between single and double agreement. Then, the way that verbs mark agreement is through a change in orientation and/or direction of movement, and finally, they interact with gestural space. It turns out that all of these properties, along with other properties, are attested in many of the sign languages documented to date, as observed by Newport and Supalla (2000) (see also the references provided in section 1). To explain the uniformity of these properties across sign languages, the featural analysis looks to the development of the agreement process. Rathmann and Mathur (2008) suggest that the process of verb agreement emerges in many sign languages as a linguistic innovation, meaning that the process takes on the ability to interface with gestural space and then remains tied to this interface. Consequently, the process does
not become lexicalized, unlike the affixation of segmental morphemes, which have the potential to diverge in form across languages. While mature sign languages are relatively uniform with respect to the properties discussed above, there is also some cross-linguistic variation. For instance, sign languages vary in whether they use an auxiliary-like element to mark agreement whenever the main verb is incapable of doing so for phonological or pragmatic reasons (Rathmann 2000; see chapter 10 for discussion of agreement auxiliaries). Then, some sign languages, e.g. those in East Asia such as Japanese Sign Language (NS), use a kind of buoy (in the sense of Liddell 2003) to which the agreement form is directed. The buoy is realized by the non-dominant hand, and instead of the dominant hand being oriented/directed to an area within gestural space, the dominant hand is oriented/directed to the buoy. The buoy could represent an argument and could take a particular handshape (in NS, distinct handshapes are used for male and female referents, respectively). Finally, there are sign languages which have been claimed not to show the range of agreement patterns discussed above, such as Al-Sayyid Bedouin Sign Language, a sign language used in a village in the Negev desert in Israel (Aronoff et al. 2004), and Kata Kolok, a village sign language of Bali (Marsaja 2008) (see chapter 24, Shared Sign Languages, for further discussion of these sign languages). The cross-linguistic variation across sign languages can again be accounted for by the diachronic development of the agreement process. Meier (2002) and Rathmann and Mathur (2008) discuss several studies (e.g. Engberg-Pedersen 1993; Supalla 1997; Senghas/Coppola 2001) which suggest that verb agreement becomes more sophisticated over time, in the sense that a language starts out by marking no or few person and number features and then progresses to marking more person and number features. That is, the grammaticalization of verb agreement seems to run in the direction of increasing complexity. Pfau and Steinbach (2006) have likewise outlined a path of grammaticalization for agreement, in which agreement marking and auxiliaries emerge only at the end of the path. The cross-linguistic variation across sign languages with respect to certain properties of verb agreement can then be explained by positing that the sign languages are at different points along the path of grammaticalization.
4. Candidacy for agreement

Even when the morphological realization of person and number features is predicted, it does not always occur; this section seeks to explain why. Rathmann and Mathur (2005) demonstrate that phonetic/phonological constraints are not the only reason that morphological realization fails to occur. Another reason is that it takes time for some verbs to become grammaticalized to the point of realizing the features of agreement (for further discussion of grammaticalization, see chapter 34). If a feature is not overtly realized on the verb, a sign language may use one of several strategies to encode the featural information. One way is to use overt pronouns (see chapter 11 on pronouns). Another way is to use word order (Fischer 1975). Yet another strategy is the insertion of an auxiliary-like element, a Person Agreement Marker (Rathmann 2000) (see chapter 10 on agreement auxiliaries).
This section focuses on the issue of how to determine which verbs participate in the process of agreement, since across sign languages only a small set of verbs participate in this process. Several approaches to this issue are considered: Padden (1983), Janis (1992, 1995), Meir (1998, 2002), Rathmann and Mathur (2002), and Quadros and Quer (2008).
4.1. Padden (1983)

Padden (1983) takes a lexical approach to determining which verbs participate in agreement: verbs are marked in the lexicon as agreeing, spatial, or plain, and only those verbs that are marked as agreeing participate in the process of agreement. Cormier, Wechsler, and Meier (1998) follow this approach within the HPSG framework: each verb is sorted by its lexical entry as plain, spatial, or agreeing. If the verb is agreeing, it is further sorted as single agreement or double agreement. Liddell (2003) takes a similar approach in relegating class membership to the lexicon, as verbs are marked with diacritic symbols indicating whether they require blending with a mental entity. Such a lexical approach faces several problems. First, some verbs change their status over time. Some verbs start as plain and become agreeing over time (e.g. ASL test). Other verbs start as spatial and become agreeing (e.g. move-a-piece-of-paper becomes give in ASL). The lexical approach misses the generalization that the boundaries between the classes are not fixed, and that verbs can migrate from one class to another in principled ways. A second issue is that some verbs have dual status. That is, a verb can be agreeing in one context (cf. teach friend) and plain in another context (cf. teach linguistics). (All examples in this paragraph are from ASL.) Likewise, a verb can be agreeing in some contexts (e.g. look-at friend) or spatial in other contexts (look-at (across) banner). There are also verbs which seem spatial sometimes (drive-to school) and plain other times (drive-to everyday). Under a lexical approach, a verb would either receive two specifications or would have to be listed twice, each time with a unique specification. The lexical approach then raises the issue of learnability, placing the burden on the child to learn both specifications (for the acquisition of agreement, see chapter 28). The lexical approach also leaves open the issue of when to use one of these verbs in a given context. Since the lexical approach assumes that the class membership of each verb is unpredictable, it allows the possibility that each sign language assigns different verbs to each class. In fact, sign languages are largely similar with respect to the verbs that belong in each class. Thus, the lexical approach does not capture the generalization that verbs in each class share certain properties.
4.2. Janis (1992, 1995)

Recognizing the issues facing a lexical approach to the class membership of verbs, Janis (1992, 1995) has developed an account that seeks to relate the conditions
on verb agreement to the case properties of the controller, using the agreement hierarchy in (5).
(5) Agreement Hierarchy (Janis 1995)
    case: direct case < locative case
              |
    GR: subject < direct object < indirect object
    SR: agent, experiencer′, patient″, recipient
    (′ only with a verb that is not body-anchored; ″ only if animate)
Janis (1992, 1995) distinguishes between agreement in ‘agreeing’ verbs and that in spatial verbs. She links the distinction to the case of the nominal controlling agreement. A nominal receives locative case “if it can be perceived either as a location or as being at a location that affects how the action or state expressed by the verb is characterized” (Janis 1995, 219). Otherwise, it receives direct case. If a nominal receives direct case, it controls agreement only if it has a feature from the list of grammatical roles (GR) as well as a feature from the list of semantic roles (SR). This requirement is indicated by a line connecting direct case to the two lists. In contrast, a nominal with locative case does not have to meet this requirement and can control agreement in any condition. If a verb has only one agreement slot (i.e. if there is single agreement), and there are two competing controllers, the higher ranked nominal controls agreement. For example, in a sentence with a subject and a direct object, the direct object will control agreement because it is ranked above the subject in the agreement hierarchy. To account for optional subject agreement (as in double agreement), Janis (1995, 219) stipulates another condition as follows: “The lowest ranked controller cannot be the sole controller of agreement.” The lowest ranked controller in the above hierarchy is the subject. Thus, the effect of this condition is that whenever the subject controls an agreement slot, another nominal (e.g. the direct object) must control another agreement slot. The agreement hierarchy proposed by Janis (1992, 1995) comes closer to capturing the conditions under which agreement occurs. At the same time, there are at least two issues facing this approach. First, the hierarchy is complex and contains a number of conditions that are difficult to motivate. For example, the hierarchy references not only case but also grammatical roles and semantic roles. Then, the hierarchy includes stipulations like “an experiencer controls agreement only with a verb that is not body-anchored” or “a patient controls agreement only if it is animate”. A second issue is that agreement has similar properties across many sign languages. To account for this fact, the hierarchy can be claimed to be universal for the family of sign languages. Yet it remains unexplained how the hierarchy has come into being for each sign language and why it is universal for this particular family.
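The competition for agreement slots just described amounts to a selection procedure, which the following sketch operationalizes (our own reading of the hierarchy in (5); the attribute names are hypothetical, and the role-specific provisos on experiencers and patients are omitted). Locative-case nominals always qualify; direct-case nominals need both a GR and an SR feature; and the subject may never be the sole controller.

    GR_RANK = {'subject': 0, 'direct object': 1, 'indirect object': 2}

    def qualifies(nom):
        # Locative case controls agreement unconditionally; direct case
        # requires features from both the GR and the SR lists in (5).
        if nom.get('case') == 'locative':
            return True
        return 'gr' in nom and 'sr' in nom

    def controllers(nominals, slots):
        ranked = sorted((n for n in nominals if qualifies(n)),
                        key=lambda n: GR_RANK.get(n.get('gr'), -1),
                        reverse=True)
        chosen = ranked[:slots]
        # "The lowest ranked controller cannot be the sole controller."
        if len(chosen) == 1 and chosen[0].get('gr') == 'subject':
            return []
        return chosen

    subj = {'case': 'direct', 'gr': 'subject', 'sr': 'agent'}
    obj = {'case': 'direct', 'gr': 'direct object', 'sr': 'recipient'}
    print([n['gr'] for n in controllers([subj, obj], slots=1)])
    # ['direct object'] -- the higher-ranked nominal wins the single slot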
4.3. Meir (1998, 2002)

To simplify the constraints on the process of verb agreement in sign languages, Meir (1998, 2002) developed another account that is inspired by the distinction between
regular and backwards verbs. As mentioned earlier in the chapter, the direction of movement in many verbs displaying agreement is from the area associated with the subject referent to the area associated with the object referent. However, there is a small set of verbs that show the opposite pattern. That is, the direction of movement is from the area associated with the object referent to the area associated with the subject referent. There have been several attempts to account for the distinction between regular and backwards verbs, starting with Friedman (1976) and continuing with Padden (1983), Shepard-Kegl (1985), Brentari (1988), Janis (1992, 1995), Meir (1998, 2002), Mathur (2000), Rathmann and Mathur (2002), and Quadros and Quer (2008), among others. In short, Friedman (1976) and Shepard-Kegl (1985) propose a semantic analysis unifying regular and backwards verbs: both sets of verbs agree with the argument bearing the semantic role of source and the argument bearing the role of goal. Padden (1983) points out that such verbs do not always agree with the goal, as in ASL friend 1invite3 party (‘My friend invited me to a party’), where party is the goal yet the verb agrees with the implicit object me. This led Padden (1983) to argue for a syntactic analysis on which the verb generally agrees with the subject and the object and the backwards verbs are lexically marked for showing the agreement in a different way than regular verbs. Brentari (1988, 1998) and Janis (1992) hypothesize that a hybrid of semantic and syntactic factors is necessary to explain the distinction. Brentari (1988), for example, proposes a Direction of Transfer Rule which states that the path movement of the verb is away from the locus associated with the referent of the subject (syntactic) if the theme is transferred away from the subject (semantic), or else the movement is toward the locus of the subject referent. Meir (1998, 2002) expands on the hybrid view and proposes the two Principles of Sign Language Agreement Morphology given in (6).
(6) Principles of Sign Language Agreement Morphology (Meir 2002, 425)
    (i) The direction of the path movement of agreement verbs is determined by the thematic roles of the arguments: it is from the R-locus of the source argument to the R-locus of the goal argument.
    (ii) The facing of the hand(s) is determined by the syntactic role of the arguments: the facing is towards the object of the verb (indirect object in the case of ditransitive agreement verbs).
According to Meir, the direction of movement realizes the morpheme DIR, which also appears with spatial verbs like ASL move, put, and drive-to and reflects the semantic analysis. This element unifies regular and backwards verbs, since they both move from the R-locus of the source to the R-locus of the goal (in most cases). The facing of the hand(s) realizes a case-assigning morpheme and represents the syntactic analysis. The case assigner also unifies regular and backwards verbs, since both face the R-locus of the object. The difference between regular and backwards verbs lies in the alignment between the thematic and syntactic roles: in regular verbs, the source and goal are aligned with the subject and the object respectively, while it is the other way around for backwards verbs. The analysis using the DIR morpheme and the case-assigning morpheme provides a straightforward way to categorize verbs with respect to whether they display agreement.
Plain verbs are those that do not have DIR or the case-assigning morpheme, while spatial verbs have only DIR and agreeing verbs have both. Since the case assigner is related to the notion of affectedness in the sense of Jackendoff (1987, 1990), it is predicted that only those verbs which select for an affected possessor show agreement. The analysis accounts for the uniformity of the properties of verb agreement across sign languages by attributing iconic roots to the morpheme DIR, which uses gestural space to show spatial relations, whether concrete or abstract. Presumably, the case-assigning morpheme has iconic roots such that the patterns of agreeing verbs (along with spatial verbs) are also universal.
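Meir’s division of labour between DIR and the case-assigning morpheme can be summarized procedurally, as in the sketch below (our own rendering; the locus labels and role names are hypothetical). The path runs from the source’s R-locus to the goal’s R-locus regardless of verb type, while facing always targets the syntactic object; regular and backwards verbs differ only in how thematic and syntactic roles are aligned.

    def agreement_form(roles):
        # roles maps thematic (source/goal) and syntactic (subject/object)
        # functions onto R-loci.
        path = (roles['source'], roles['goal'])   # DIR morpheme (thematic)
        facing = roles['object']                  # case assigner (syntactic)
        return {'path': path, 'facing': facing}

    # Regular verb (e.g. GIVE): subject = source, object = goal.
    regular = {'source': 'a', 'goal': 'b', 'subject': 'a', 'object': 'b'}
    # Backwards verb (e.g. TAKE): subject = goal, object = source.
    backwards = {'source': 'b', 'goal': 'a', 'subject': 'a', 'object': 'b'}

    print(agreement_form(regular))     # path ('a', 'b'), facing 'b'
    print(agreement_form(backwards))   # path ('b', 'a'), facing 'b'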
4.4. Rathmann and Mathur (2002)

To predict which verbs participate in agreement, Rathmann and Mathur (2002) propose an animacy analysis, inspired by Janis (1992, 1995), that imposes a condition on the process of verb agreement: only those verbs which select for two animate arguments may participate in the process. (The label ‘featural analysis’ refers to Rathmann and Mathur (2008), which focuses on the features that are involved in agreement and the emergence of agreement as a process, while the label ‘animacy analysis’ refers to Rathmann and Mathur (2002), which seeks to characterize the set of verbs that participate in agreement and the modality differences between sign and spoken languages with respect to agreement.) To support the animacy analysis, they offer a number of diagnostic tests independent of argument structure to determine whether a verb participates in the process of agreement: the ability to display the first person object form (reversibility), the ability to display the multiple form, and the ability to co-occur with pam (Person Agreement Marker, an auxiliary-like element) in sign languages that use such an element. The animacy analysis predicts that regular verbs like ASL ask and help and backwards verbs like take and copy participate in agreement. It also predicts that verbs like ASL buy or think, which select for only one animate argument, do not participate in agreement. It also correctly predicts that a verb like ASL teach or look-at can participate in agreement only if the two arguments are animate. This suggests that agreement is not tied to specific classes of lexical items but relates to their use in particular sentences. Thus it is possible to use the multiple form with these verbs only in a sentence like I taught many students or I looked at many students but not in a sentence like I taught many subjects or I looked across a banner. While the latter sentences look similar to the agreement forms in that the orientation and direction of movement in the verbs reflect areas associated with a referent (as in I looked at a book), they are claimed to involve a different process than agreement, since they do not take the multiple form or co-occur with pam. To account for backwards verbs, the animacy analysis assumes that the backwards movement in those verbs is lexically fixed, which may be motivated by an account like Meir (1998) or Taub (2001). When the process of agreement applies to this lexically fixed movement, the resulting form yields the correct direction of movement and orientation. Further factors such as discourse considerations, phonetic and phonological constraints, and historical circumstances determine whether the agreement form in both regular and backwards verbs is ultimately realized.
Whereas the thematic analysis of Meir (1998, 2002) takes the DIR morpheme and the case-assigning morpheme to participate in the process of agreement, the animacy analysis assumes that verbs themselves participate in the process of agreement and that they do not require a complex morphological structure to do so.
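The core of the animacy condition can be stated as a one-line filter over argument structures, checked per sentence rather than per lexeme. The sketch below is only our illustration of that condition (the representation of arguments is hypothetical), leaving aside the discourse, phonological, and historical factors mentioned above.

    def may_agree(args):
        # Rathmann/Mathur (2002): a verb participates in agreement only if
        # it selects for two animate arguments -- evaluated in context.
        return sum(1 for a in args if a['animate']) >= 2

    teach_students = [{'np': 'I', 'animate': True},
                      {'np': 'many students', 'animate': True}]
    teach_subjects = [{'np': 'I', 'animate': True},
                      {'np': 'many subjects', 'animate': False}]
    print(may_agree(teach_students))   # True  -> multiple form possible
    print(may_agree(teach_subjects))   # False -> no agreement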
4.5. Quadros and Quer (2008)

Quadros and Quer (2008) revisit the conditions on verb agreement by considering the properties of backwards verbs and auxiliaries in Brazilian Sign Language (LSB) and Catalan Sign Language (LSC). They argue against a thematic account of agreement in light of examples from LSB and LSC that share the same lexical conceptual structure but have lexicalized movements that run in the opposite direction: for instance, ask is regular in LSB but backwards in LSC, and ask-for is backwards in LSB but regular in LSC. In addition, they note that the same lexical conceptual structure in the same language can show both agreeing and non-agreeing forms, e.g. borrow in LSC. Moreover, they claim that Rathmann and Mathur’s (2008) diagnostics for distinguishing between agreeing and spatial verbs do not work in LSB and LSC, leading Quadros and Quer to question whether it is necessary to distinguish between agreeing and spatial verbs. This question will need to be addressed by carefully re-examining the diagnostic criteria for agreeing and spatial verbs across sign languages. Quadros and Quer (2008) offer an alternative view in which two classes of verbs can be distinguished according to their syntactic properties: agreeing and non-agreeing. Their class of agreeing verbs includes what have been called agreeing and spatial verbs in Padden’s (1983) typology. Semantic factors distinguish between agreeing and spatial verbs; thus, agreeing verbs (in the sense of Padden 1983) agree with R-loci which manifest person and number features, while spatial verbs agree with spatial features. Otherwise, the agreement form in both types of verbs is realized as a path. To support this view, they claim that it is possible for a verb to agree with both a nominal and a locative. By unifying the process of agreement across agreeing and spatial verbs, they remove the need for a special condition on the process of agreement. Quadros and Quer provide two pieces of evidence that agreement with R-loci constitutes syntactic agreement. First, along with Rathmann and Mathur (2008), they observe that when an auxiliary appears with a backwards verb, the direction of movement in the auxiliary is from the area associated with the subject referent to the area associated with the object referent, even when the direction of movement in the backwards verb is the opposite. Second, they note with Rathmann and Mathur (2008) that auxiliaries appear only with those backwards verbs that take animate objects and not with backwards verbs that take inanimate objects. Quadros and Quer (2008) take a view on backwards verbs that is different from that of Meir (1998) and Rathmann and Mathur (2008): they treat backwards verbs as handling verbs with a path that agrees with locations as opposed to syntactic arguments; that is, they treat them as spatial verbs. Otherwise, backwards verbs are still grouped together with regular verbs, because they adopt a broader view of verb agreement in sign languages: it is not just restricted to person and number features but also occurs with spatial features. While this broader view can explain cross-linguistic similarities with respect to properties of verb agreement, it has yet to overcome the
issue of listability. It is possible to resolve the listability issue with person and number features by having a minimum of two contrastive values. It is, however, less clear whether it is possible to do the same with spatial features.
4.6. Discussion

Several approaches regarding the conditions on agreement have been presented. One approach, exemplified by Padden (1983) and Liddell (2003), lets the lexicon determine when a verb participates in agreement. Janis (1992) argues that an agreement hierarchy based on case and other grammatical properties determines which verbs display agreement. Meir (1998) seeks to simplify this mechanism through a thematic approach: verbs that contain a DIR morpheme and a case-assigning morpheme qualify for agreement. Rathmann and Mathur (2008) suggest doing away with the case-assigning morpheme and restricting the process of agreement to those verbs that select for two animate arguments. Quadros and Quer (2008), on the other hand, group both backwards verbs and agreeing verbs with spatial verbs, thus removing the need for a special condition. Another possibility, which has recently been proposed by Steinbach (2011), is that verb agreement should be considered part of a unified agreement process along with role shift and classifier agreement. The issue of whether verb agreement in sign languages needs a particular condition awaits further empirical investigation of the argument structure of verbs that undergo agreement and those that do not across a number of sign languages. If it turns out that there is a condition (however it is formulated) on the process of agreement, as argued by Janis (1992), Meir (1998), and Rathmann and Mathur (2008), this would be one instance in which verb agreement in sign languages differs from that in spoken languages.
5. Conclusion: agreement in sign and spoken languages

We now go back to the questions raised at the beginning and consider how sign languages compare with one another and with spoken languages with respect to the realization of person and number features or the lack thereof. The preceding sections suggest the following picture. With regard to similarities across signed and spoken languages, the requirement that the set of person and number features of the arguments be realized in some way appears to be universal. The realization of person and number features can be explained through an agreement process that is common to both modalities. The agreement process, as well as the features underlying the process, may be made available by universal principles of grammar, so that it appears in both signed and spoken languages. At the same time, there are important differences between signed and spoken languages with regard to agreement. First, the agreement process in sign languages is restricted to a smaller set of verbs, whereas agreement in spoken languages, if it is marked at all, is usually marked on the whole set of verbs (setting aside exceptions).
This cross-modal difference could be resolved if the agreement process in sign languages is understood to be one of several distinct agreement processes available to sign languages, with the choice of a particular agreement process depending on the argument structure of the verb. If that is the case, and if one takes into account that there are likewise restrictions on the agreement process in many spoken languages (Corbett 2006), sign languages are no different to spoken languages in this regard. Another cross-modal difference is that the properties of agreement are more uniform across sign languages than across spoken languages. This difference can be explained by yet another cross-modal difference: specific agreement forms in sign languages, in particular the non-first person singular form, require interaction with gestural space, whereas this interaction is optional for spoken languages. Since gestural space is universally available to all languages, and since it is involved in the realization of certain person and number features in sign languages, these considerations would explain why verb agreement looks remarkably similar across mature sign languages. The cross-modal similarities can then be traced to universal principles of grammar, while the cross-modal differences are rooted in the visual-manual modality of sign languages.
6. Literature

Aarons, Debra/Bahan, Ben/Kegl, Judy/Neidle, Carol 1992 Clausal Structure and a Tier for Grammatical Marking in American Sign Language. In: Nordic Journal of Linguistics 15, 103–142.
Aronoff, Mark/Meir, Irit/Sandler, Wendy 2005 The Paradox of Sign Language Morphology. In: Language 81, 301–344.
Aronoff, Mark/Padden, Carol/Meir, Irit/Sandler, Wendy 2004 Morphological Universals and the Sign Language Type. In: Booij, Geert/Marle, Jaap van (eds.), Yearbook of Morphology 2004. Dordrecht: Kluwer Academic Publishers, 19–40.
Bahan, Ben 1996 Non-manual Realization of Agreement in American Sign Language. PhD Dissertation, Boston University.
Bos, Heleen 1994 An Auxiliary Verb in Sign Language of the Netherlands. In: Ahlgren, Inger/Bergman, Brita/Brennan, Mary (eds.), Perspectives on Sign Language Structure: Papers from the Fifth International Symposium on Sign Language Research, Vol. 1. Durham: International Sign Linguistic Association, 37–53.
Brentari, Diane 1988 Backwards Verbs in ASL: Agreement Re-opened. In: MacLeod, Lynn (ed.), Parasession on Agreement in Grammatical Theory (CLS 24, Vol. 2). Chicago: Chicago Linguistic Society, 16–27.
Cormier, Kearsy 2002 Grammaticization of Indexic Signs: How American Sign Language Expresses Numerosity. PhD Dissertation, The University of Texas at Austin.
Cormier, Kearsy/Wechsler, Stephen/Meier, Richard 1998 Locus Agreement in American Sign Language. In: Webelhuth, Gert/Koenig, Jean-Pierre/Kathol, Andreas (eds.), Lexical and Constructional Aspects of Linguistic Explanation. Stanford, CA: CSLI, 215–229.
Engberg-Pedersen, Elisabeth 1993 Space in Danish Sign Language: The Semantics and Morphosyntax of the Use of Space in a Visual Language. Hamburg: Signum.
Fauconnier, Gilles 1985 Mental Spaces: Aspects of Meaning Construction in Natural Language. Cambridge, MA: MIT Press.
Fauconnier, Gilles 1997 Mappings in Thought and Language. Cambridge: Cambridge University Press.
Fischer, Susan 1975 Influences on Word Order Change in American Sign Language. In: Li, Charles (ed.), Word Order and Word Order Change. Austin, TX: The University of Texas Press, 1–25.
Fischer, Susan 1996 The Role of Agreement and Auxiliaries in Sign Languages. In: Lingua 98, 103–120.
Fischer, Susan/Gough, Bonnie 1978 Verbs in American Sign Language. In: Sign Language Studies 18, 17–48.
Friedman, Lynn 1976 The Manifestation of Subject, Object, and Topic in the American Sign Language. In: Li, Charles (ed.), Subject and Topic. New York, NY: Academic Press, 125–148.
Hahm, Hyun-Jong 2006 Person and Number Agreement in American Sign Language. In: Müller, Stefan (ed.), Proceedings of the 13th International Conference on Head-Driven Phrase Structure Grammar. Stanford, CA: CSLI, 195–211.
Hong, Sung-Eun 2008 Eine Empirische Untersuchung zu Kongruenzverben in der Koreanischen Gebärdensprache. Hamburg: Signum.
Jackendoff, Ray 2002 Foundations of Language: Brain, Meaning, Grammar, Evolution. Oxford: Oxford University Press.
Janis, Wynne 1992 Morphosyntax of ASL Verb Phrase. PhD Dissertation, State University of New York, Buffalo.
Janis, Wynne 1995 A Cross-linguistic Perspective on ASL Verb Agreement. In: Emmorey, Karen/Reilly, Judy (eds.), Language, Gesture, and Space. Hillsdale, NJ: Lawrence Erlbaum, 195–223.
Johnston, Trevor/Schembri, Adam 2007 Australian Sign Language: An Introduction to Sign Language Linguistics. Cambridge: Cambridge University Press.
Klima, Edward/Bellugi, Ursula 1979 The Signs of Language. Cambridge, MA: Harvard University Press.
Lacy, Richard 1974 Putting Some of the Syntax Back Into Semantics. Paper Presented at the Annual Meeting of the Linguistic Society of America, New York.
Liddell, Scott 1990 Four Functions of a Locus: Re-examining the Structure of Space in ASL. In: Lucas, Ceil (ed.), Sign Language Research: Theoretical Issues. Washington, DC: Gallaudet University Press, 176–198.
Liddell, Scott 1995 Real, Surrogate, and Token Space: Grammatical Consequences in ASL. In: Emmorey, Karen/Reilly, Judy (eds.), Language, Gesture, and Space. Hillsdale, NJ: Lawrence Erlbaum, 19–42.
Liddell, Scott 2000 Indicating Verbs and Pronouns: Pointing Away from Agreement. In: Emmorey, Karen/Lane, Harlan (eds.), The Signs of Language Revisited: An Anthology to Honor Ursula Bellugi and Edward Klima. Mahwah, NJ: Lawrence Erlbaum, 303–320.
Liddell, Scott 2003 Grammar, Gesture and Meaning in American Sign Language. Cambridge: Cambridge University Press.
Liddell, Scott/Metzger, Melanie 1998 Gesture in Sign Language Discourse. In: Journal of Pragmatics 30, 657–697.
Lillo-Martin, Diane/Klima, Edward 1990 Pointing out Differences: ASL Pronouns in Syntactic Theory. In: Fischer, Susan/Siple, Patricia (eds.), Theoretical Issues in Sign Language Research, Vol. 1: Linguistics. Chicago: University of Chicago Press, 191–210.
Lillo-Martin, Diane/Meier, Richard 2011 On the Linguistic Status of ‘Agreement’ in Sign Languages. In: Theoretical Linguistics 37, 95–141.
Marsaja, I Gede 2008 Desa Kolok – A Deaf Village and Its Sign Language in Bali, Indonesia. Nijmegen: Ishara Press.
Massone, Maria I./Curiel, Monica 2004 Sign Order in Argentine Sign Language. In: Sign Language Studies 5, 63–93.
Mathur, Gaurav 2000 Verb Agreement as Alignment in Signed Languages. PhD Dissertation, Massachusetts Institute of Technology.
Mathur, Gaurav 2002 Number and Agreement in Signed Languages. Paper Presented at the Linguistics Association of Great Britain Spring Meeting, Liverpool.
Mathur, Gaurav/Rathmann, Christian 2006 Variability in Verb Agreement Forms Across Four Sign Languages. In: Goldstein, Louis/Best, Catherine/Whalen, Douglas (eds.), Laboratory Phonology VIII: Varieties of Phonological Competence. Berlin: Mouton de Gruyter, 285–314.
Mathur, Gaurav/Rathmann, Christian 2010 Verb Agreement in Sign Language Morphology. In: Brentari, Diane (ed.), Sign Languages: A Cambridge Language Survey. Cambridge: Cambridge University Press, 173–196.
McBurney, Susan 2002 Pronominal Reference in Signed and Spoken Language: Are Grammatical Categories Modality-dependent? In: Meier, Richard/Cormier, Kearsy/Quinto-Pozos, David (eds.), Modality and Structure in Signed and Spoken Languages. Cambridge: Cambridge University Press, 329–369.
Meier, Richard 1982 Icons, Analogues, and Morphemes: The Acquisition of Verb Agreement in American Sign Language. PhD Dissertation, University of California, San Diego.
Meier, Richard 1990 Person Deixis in American Sign Language. In: Fischer, Susan/Siple, Patricia (eds.), Theoretical Issues in Sign Language Research, Vol. 1: Linguistics. Chicago: University of Chicago Press, 175–190.
Meir, Irit 1998 Thematic Structure and Verb Agreement in Israeli Sign Language. PhD Dissertation, The Hebrew University of Jerusalem.
Meir, Irit 2002 A Cross-modality Perspective on Verb Agreement. In: Natural Language and Linguistic Theory 20, 413–450.
Neidle, Carol/Kegl, Judy/MacLaughlin, Dawn/Bahan, Ben/Lee, Robert 2000 The Syntax of American Sign Language: Functional Categories and Hierarchical Structure. Cambridge, MA: MIT Press.
Newport, Elissa/Supalla, Ted 2000 Sign Language Research at the Millennium. In: Emmorey, Karen/Lane, Harlan (eds.), The Signs of Language Revisited: An Anthology to Honor Ursula Bellugi and Edward Klima. Mahwah, NJ: Lawrence Erlbaum, 103–114.
Padden, Carol 1983 Interaction of Morphology and Syntax in American Sign Language. PhD Dissertation, University of California, San Diego [Published 1988 by Garland Outstanding Dissertations in Linguistics, New York].
Pfau, Roland/Steinbach, Markus 2006 Modality-independent and Modality-specific Aspects of Grammaticalization in Sign Languages. In: Linguistics in Potsdam 24, 5–98.
Pollard, Carl/Sag, Ivan 1994 Head-Driven Phrase Structure Grammar. Chicago: University of Chicago Press.
Quadros, Ronice de 1999 Phrase Structure of Brazilian Sign Language. PhD Dissertation, Pontifíca Universidade Católica do Rio Grande do Sul, Porto Alegre.
Quadros, Ronice de/Quer, Josep 2008 Back to Back(wards) and Moving on: On Agreement, Auxiliaries and Verb Classes. In: Quadros, Ronice M. de (ed.), Sign Languages: Spinning and Unraveling the Past, Present, and Future. Forty-five Papers and Three Posters from the 9th Theoretical Issues in Sign Language Research Conference, Florianopolis, Brazil, December 2006. Petrópolis: Editora Arara Azul. [Available at: www.editora-arara-azul.com.br/EstudosSurdos.php].
Quer, Josep/Frigola, Santiago 2006 Cross-linguistic Research and Particular Grammars: A Case Study on Auxiliary Predicates in Catalan Sign Language (LSC). Paper Presented at the Workshop on Cross-linguistic Sign Language Research, Max Planck Institute for Psycholinguistics, Nijmegen.
Rathmann, Christian 2000 The Optionality of Agreement Phrase: Evidence from Signed Languages. MA Thesis, The University of Texas at Austin.
Rathmann, Christian/Mathur, Gaurav 2002 Is Verb Agreement the Same Cross-modally? In: Meier, Richard/Cormier, Kearsy/Quinto-Pozos, David (eds.), Modality and Structure in Signed and Spoken Languages. Cambridge: Cambridge University Press, 370–404.
Rathmann, Christian/Mathur, Gaurav 2005 Unexpressed Features of Verb Agreement in Signed Languages. In: Booij, Geert/Guevara, Emiliano/Ralli, Angela/Sgroi, Salvatore/Scalise, Sergio (eds.), Morphology and Linguistic Typology. On-line Proceedings of the 4th Mediterranean Morphology Meeting (MMM4). Università degli Studi di Bologna, 235–250.
Rathmann, Christian/Mathur, Gaurav 2008 Verb Agreement as a Linguistic Innovation in Signed Languages. In: Quer, Josep (ed.), Signs of the Time: Selected Papers from TISLR 2004. Hamburg: Signum, 191–216.
Sandler, Wendy/Lillo-Martin, Diane 2006 Sign Language and Linguistic Universals. Cambridge: Cambridge University Press.
Sapountzaki, Galini 2005 Free Functional Elements of Tense, Aspect, Modality and Agreement as Possible Auxiliaries in Greek Sign Language. PhD Dissertation, University of Bristol.
Senghas, Ann/Coppola, Marie 2001 Children Creating Language: How Nicaraguan Sign Language Acquired a Spatial Grammar. In: Psychological Science 12, 323–328.
Shepard-Kegl, Judy 1985 Locative Relations in American Sign Language: Word Formation, Syntax, and Discourse. PhD Dissertation, Massachusetts Institute of Technology.
Smith, Wayne 1990 Evidence for Auxiliaries in Taiwan Sign Language. In: Fischer, Susan/Siple, Patricia (eds.), Theoretical Issues in Sign Language Research, Vol. 1: Linguistics. Chicago: University of Chicago Press, 211–228.
Steinbach, Markus 2011 Dimensions of Sign Language Agreement: From Phonology to Semantics. Invited Lecture at Formal and Experimental Advances in Sign Language Theory (FEAST), Venice.
Supalla, Ted 1997 An Implicational Hierarchy in Verb Agreement in American Sign Language. Manuscript, University of Rochester.
Sutton-Spence, Rachel/Woll, Bencie 1999 The Linguistics of British Sign Language: An Introduction. Cambridge: Cambridge University Press.
Taub, Sarah 2001 Language from the Body: Iconicity and Metaphor in American Sign Language. Cambridge: Cambridge University Press.
Thompson, Robin/Emmorey, Karen/Kluender, Robert 2006 The Relationship Between Eye Gaze and Agreement in American Sign Language: An Eye-tracking Study. In: Natural Language and Linguistic Theory 24, 571–604.
Zeshan, Ulrike 2000 Sign Language in Indo-Pakistan: A Description of a Signed Language. Amsterdam: Benjamins.
Zwitserlood, Inge/Gijn, Ingeborg van 2006 Agreement Phenomena in Sign Language of the Netherlands. In: Ackema, Peter/Brandt, Patrick/Schoorlemmer, Maaike/Weerman, Fred (eds.), Arguments and Agreement. Oxford: Oxford University Press, 195–229.
Gaurav Mathur, Washington, DC (USA)
Christian Rathmann, Hamburg (Germany)
8. Classifiers

1. Introduction
2. Classifiers and classifier categories
3. Classifier verbs
4. Classifiers in signs other than classifier verbs
5. The acquisition of classifiers in sign languages
6. Classifiers in spoken and sign languages: a comparison
7. Conclusion
8. Literature
Abstract

Classifiers (currently also called ‘depicting handshapes’) are observed in almost all sign languages studied to date and form a well-researched topic in sign language linguistics. Yet these elements are still subject to much debate with respect to a variety of matters. Several different categories of classifiers have been posited on the basis of their semantics and the linguistic context in which they occur. The function(s) of classifiers are not fully clear yet. Similarly, there are differing opinions regarding their structure and the structure of the signs in which they appear. Partly as a result of comparison to classifiers in spoken languages, the term ‘classifier’ itself is under debate. In contrast to these disagreements, most studies on the acquisition of classifier constructions seem to agree that these are difficult for Deaf children to master. This chapter presents and discusses all these issues from the viewpoint that classifiers are linguistic elements.
1. Introduction

This chapter is about classifiers in sign languages and the structures in which they occur. Classifiers are reported to occur in almost all sign languages researched to date (a notable exception is Adamorobe Sign Language (AdaSL), as reported by Nyst (2007)). Classifiers are generally considered to be morphemes with a non-specific meaning, which are expressed by particular configurations of the manual articulator (or: hands) and which represent entities by denoting salient characteristics. Some examples of classifier constructions from different sign languages are shown in (1): Jordanian Sign Language (LiU; Hendriks 2008, 142); Turkish Sign Language (TİD); Hong Kong Sign Language (HKSL; Tang 2003, 153); Sign Language of the Netherlands (NGT); Kata Kolok (KK); German Sign Language (DGS); American Sign Language (ASL; Brentari 1999, 21); and French Sign Language (LSF; Cuxac/Sallandre 2007, 18).
Although little cross-linguistic work has been undertaken so far, the descriptions and examples of classifiers in various sign languages appear quite similar (except for the classifier inventories, although there, too, many similarities exist). Therefore, in this chapter, the phenomenon of classifiers will be described as comparable in all sign languages for which they have been reported.
The future will show to what extent cross-linguistic differences exist.
Initially, classifier structures were considered mime-like and pantomimic, and their first descriptions were as visual imagery (e.g., DeMatteo 1977; Mandel 1977). Soon after that, however, these structures began to be analyzed as linguistic, morphologically complex signs. Notable is Supalla's (1982, 1986) seminal work on classifiers in ASL. Numerous studies of classifiers in various sign languages have been undertaken since. Currently, classifiers are generally considered to be meaningful elements in morphologically complex structures, even though the complexity of these structures is not yet clear, and there is much controversy about the way in which they should be analyzed. The controversy is partly due to the fact that different studies use varying and sometimes unclear assumptions about the kinds of linguistic elements that classifiers in sign languages are, as well as about their function and the types of constructions in which they occur. Space limitations do not allow extensive discussion of the various views. The main points in the literature will be explained and, where possible, related to the different views in order to obtain as much clarity as possible.
This chapter is structured as follows. The next section focuses on categories of classifiers in sign languages. This is followed by a section on classifier verbs. Section 4 discusses signs in which classifiers can be recognized but which differ in various respects from the classifier verbs that are the topic of section 3. Two sections follow with an overview of the acquisition of classifiers in sign languages (section 5) and a comparison of classifiers in spoken and sign languages (section 6), respectively.
Finally, section 7 contains some further considerations and conclusions.
2. Classifiers and classifier categories

The start of the study of classifiers in sign languages coincided with (renewed) interest in classifiers in spoken languages. Research on the latter traditionally focused on the semantics of classifiers, i.e. on how nouns are assigned to particular classes, in order to understand the ways in which humans categorize the world around them. On the basis of these assignments, various categories were suggested according to which nouns are classified in different languages. In addition, different types of classifier languages (or systems) were suggested. An overview article on the characteristics, typology, and classification of 50 different classifier languages (Allan 1977) has had a large influence on research on sign language classifiers. First (as will be further exemplified in section 6), sign languages seemed to fall into one of the four types of classifier languages suggested by Allan, viz. predicate classifier languages, where classifiers occur with verbs (in contrast to appearing with numerals, nouns, or in locative constructions, as in Allan's other three types of classifier languages). Second, in the spoken language literature, several semantic dimensions were distinguished according to which nouns are classified, such as material (including animacy), shape, consistency, size, location, arrangement, and quanta (see Allan 1977; but also Denny 1979; Denny/Creider 1986; Adams 1986). Similarly, much of the initial work on sign language classifiers has focused on semantic classification.
2.1. Classifier categories

Supalla (1982, 1986) considers ASL a predicate classifier language in Allan's categorization and categorizes the classifiers of ASL into five main types, some of which are divided into subtypes:

1. Semantic classifiers, which represent nouns by some semantic characteristic of their referents (e.g., belonging to the class of humans, animals, or vehicles);
2. Size and Shape Specifiers (SASSes), which denote nouns according to the visual-geometric features of their referents. SASSes come in two subtypes:
– static SASSes, which consist of a handshape (or combination of two hands) that indicates the size/shape of an entity;
– tracing SASSes, which have a movement of the hand(s) that outlines an entity's size/shape, and in which the shape of the manual articulator denotes the dimensionality of that entity;
3. Instrumental classifiers, which also come in two types:
– instrumental hand classifiers, in which the hand represents a hand that holds and/or manipulates another entity; and
– tool classifiers, in which the hand represents a tool that is being manipulated;
4. Bodypart classifiers: parts of the body represent themselves (e.g., hands, eyes) or limbs (e.g., hands, feet); and
5. A Body classifier: the body of the signer represents an animate entity.
This categorization is based not only on semantics (as in spoken language classifications), but also on different characteristics of the classifiers within each type (in contrast to studies on spoken language classifiers). Basically, SASSes classify referents with respect to their shape, Instrumental classifiers on the basis of their function as instruments/tools, and the Body classifier represents animate entities. In addition, SASSes and Instrumental classifiers are claimed to be morphologically complex, in contrast to Semantic classifiers, and Body classifiers are a special category because they cannot be combined with motion or location verbs, in contrast to classifiers of other types (e.g., Supalla 1982, 1986; Newport 1982; Schick 1990a).
Since then, similar as well as new categorizations have been suggested for ASL and a number of other sign languages (see, amongst others, McDonald (1982), Liddell/Johnson (1987), and Benedicto/Brentari (2004) for ASL; Johnston (1989) and Schembri (2001, 2003) for Australian Sign Language (Auslan); Corazza (1990) for Italian Sign Language (LIS); Brennan (1990a,b) for British Sign Language (BSL); Hilzensauer/Skant (2001) for Austrian Sign Language (ÖGS); and Fischer (2000) for Japanese Sign Language (NS)), and the categories have received various different terms. There is some overlap between them, which shows that the categorizations are problematic. This is important because the suggested categories have a large impact on the interpretation of classifiers and the structures in which they occur.
Currently, two main categories of classifiers are distinguished, called 'Whole Entity classifiers' and 'Handling classifiers'. The first category contains classifiers that directly represent referents by denoting particular semantic and/or shape features. By and large, this category comprises Supalla's Semantic classifiers, static SASSes, some Bodypart classifiers, and Tool classifiers. In the category of Handling classifiers, we find classifiers that represent entities that are being held and/or moved, often (but not exclusively) by a human agent. This category contains classifiers that were previously categorized as Instrumental classifiers and some Bodypart classifiers. Examples of Whole Entity classifiers (WECL) and Handling classifiers (HCL) from TİD and DGS are shown in (2) and (3), where the manual articulator represents a flattish entity (a book) and a cylindrical entity (a mug), respectively. In (2a) and (3a), Whole Entity classifiers are used for these entities – the hands directly represent the entities; Handling classifiers are used for the same entities in (2b) and (3b), the hands indicating that the entities are held in the hand.
The Body classifier category proposed by Supalla (1982, 1986), which consists of only one element (the only classifier that is not represented phonologically by a configuration of the manual articulator but by the signer's body), is currently no longer considered a classifier by most researchers but rather a means for referential shift (e.g., Engberg-Pedersen 1995; Morgan/Woll 2003; see also chapter 17 on utterance reports and constructed action). Although some researchers still count the category of tracing SASSes (viz. the subset of elements that consist of a tracing movement and a manual articulator, see (4)) among the classifiers, these differ in various respects from all other classifiers. In contrast to other classifiers, tracing SASSes (i) are not expressed by a mere hand configuration but also need the tracing movement to indicate the shape of the referent; (ii) cannot be combined with verbs of motion; (iii) denote specific shape information (in fact, all kinds of shapes can be outlined, from square to star-shaped to Italy-shaped); and, most importantly, (iv) can be used in a variety of syntactic contexts: they appear as nouns, adjectives, and (ad)verbs, and do not seem to be used anaphorically (as will be exemplified in the next section). For these reasons, tracing SASSes are better placed outside the domain of classifiers.
Thus, ASL and most other sign languages researched to date can be argued to have two main categories of classifiers: Whole Entity classifiers and Handling classifiers. This categorization is based not so much on the semantics of the units as on their function in the grammar, which will be discussed in more detail in section 4. Evidence from syntax and discourse will be given to support the need to distinguish these two types.
2.2. Classifiers: forms, denotation, and variation

Entities are categorized according to semantic dimensions, as in spoken languages. Material (viz. animacy) and shape appear to be the most prominent dimensions in all the sign languages that have classifiers. As for Whole Entity classifiers, most sign languages appear to have separate classifiers for animate entities, although the forms of the classifiers may differ. There is a @-form (e.g., in ASL, NGT, DGS, DSL, and Auslan), and a %-form has been reported in e.g. HKSL, Taiwan Sign Language, and Thai Sign Language. Some languages also have a 0-form for animate entities (e.g. DSL). Many sign languages have a classifier for legged entities (including humans and animals), represented by a -form (a variant is the form with bent fingers , mostly used for animals). Some languages have a special classifier for vehicles, viz. ASL ( ), LiU ( ). However, some of the classifiers mentioned here may not be restricted to a particular class, for example vehicles, but may also include other types of entities; e.g., the vehicle classifier reported in some languages (*) may also include wide, flattish entities. Many sign languages have a special classifier for airplanes ( or ) and trees (< or plus lower arm). Most sign languages have rather extensive sets of classifiers denoting shapes: long and thin, solid, round (of various sizes), flat, cylindrical, bulky, tiny – and some even have a classifier for square entities (e.g., TİD; see (1b)). All these shape-denoting classifiers are formed by varied numbers of extended, spread, and/or bent fingers. Some researchers (such as Supalla 1982, 1986; Newport 1982; Schick 1990a,b) assume that these classifiers are themselves morphologically complex: each finger forms a separate morpheme. Some sign languages are reported to have default or general classifiers (e.g., a form where the tip of the index finger is important) that do not denote any characteristic of an entity, or a flat form (*) (e.g., NGT, ASL, and HKSL). Examples of classifiers from various sign languages were shown in (1)–(3).
Few classifier inventories are available; many available classifier studies focus on explanations of the denotations and properties of the classifiers and use only a subset of the classifier forms to illustrate these. It is therefore hardly possible to gauge the variety and the extent of the sets of classifiers in the various sign languages. What becomes clear from the literature is that signers of most sign languages can use more than one classifier to represent a particular entity, in order to focus on a particular (different) characteristic of that entity (or to defocus it). For instance, a person can be represented with a classifier for animate entities, but a legs classifier will be used when the focus is on a person standing, or on the manner of locomotion (walking, sliding). A plate or a CD can be represented by a flat form (*), but also by a round form (J). A car can be represented by a specific vehicle classifier in some sign languages, but signers may also choose to use a flat form (*), for example when indicating that there is something on top of the car (by placing another classifier on top of the classifier representing the car).
The sets of Handling classifiers in the various languages seem so far to be quite similar, although full inventories of these classifiers are not often provided. The form of these classifiers indicates the shape of an entity by the way in which it is held: thin or tiny entities are often represented by a M-form, while long and thin entities, as well as entities that are held by a kind of handle, use a -form.
Cylindrical entities are held with a :-form, flattish entities are held with a -form, thicker ones with a -form, and bulkier entities with one or two -forms. A signer can choose to use a special form when the entity is held in a way that differs from the normal one, for example because the handling requires (more) force, or to indicate that the entity requires controlled or delicate handling, as when it is fragile or filthy.
Although the manual articulator usually represents the hand of a human agent holding an entity, in some cases the manipulator is not a human agent but, for example, a hook or a grabber. It is possible to indicate the shape of such manipulators, too (in this instance by a - and a = -form, respectively).
Thus, many sign languages share sets of classifier forms, but there are also language-specific forms. In Whole Entity classifiers, these forms often denote material and shape characteristics. In both classifier categories, some variation in the choice of a classifier is possible, which serves to focus on particular aspects of the referent.
3. Classifier verbs

For a good understanding, linguistic elements need to be investigated in linguistic contexts. Classifiers in sign languages often occur in combination with verbs, specifically verbs that indicate (i) a referent's motion through space, a change of posture, and its location or existence somewhere in space, and (ii) the handling of referents (Supalla 1982, 1986; Schembri 2001; Engberg-Pedersen 1993; Wallin 1996, 2000; Tang 2003; and many others). These verbs, and particularly the first type, have been the focus of most of the research on classifiers in sign languages. Verb-classifier combinations are referred to by a variety of terms in the literature (such as spatial-locative predicates, polymorphemic predicates/verbs, productive signs, and highly iconic structures, i.e. transfers of situation, to mention a few). The terms used often reflect a particular view on the structure of these combinations. In this chapter, they will be referred to as 'classifier verbs'.
Studies vary with respect to what they consider to be classifier verbs. For example, verbs of geometrical description (or tracing SASSes) that are made at particular locations in space are sometimes counted among the classifier verbs; sometimes verbs expressing the manner of locomotion are included; and some studies do not restrict the occurrence of classifiers to motion verbs but also include other verbs in which the manual articulator is meaningful. Different analyses of classifiers and classifier verbs result. We will focus here on verbs that express a directed motion of a referent through space, a change of posture of a referent, the localization of a referent in sign space, and the existence of a referent at a location in sign space, for both Whole Entity and Handling classifiers.
Let us look at a typical example of classifier verbs in context from ASL in (5) (from Emmorey 2002, 87). In such structures, a referent is initially introduced by a noun, which is then followed by a verb with a classifier representing the referent of the noun (signs 1 and 3 introduce a referent, and signs 2 and 4 contain classifier verbs). If more than one referent is represented in space, the bigger/backgrounded entity is introduced first (the 'Ground' in the literature on language and space, e.g., Talmy 1985), and then the smaller entity, which is in the focus of attention (the 'Figure'). The simultaneous representation of the referents in a classifier construction, whose particular positioning expresses the spatial relation between the referents, is reported to be obligatory in some sign languages (see Supalla 1982; Perniss 2007; Morgan/Woll 2008; Chang/Su/Tai 2005; and Tang/Sze/Lam 2007). In the following sections, we will focus on the structure of classifier verbs.
3.1. The matter of morphological complexity of classifier verbs

The morphological structure of classifier verbs is rather underinvestigated, which is surprising in view of the fact that sign languages are generally claimed to have complex morphology, and classifier verb formation is considered a very productive process. Supalla's (1982, 1986) work gives an extensive morphological analysis of classifier verbs. A classifier verb, in his view, is one (or a combination) of a small subset of verb roots, which can be combined with large numbers of affixes. The most prominent of these affixes is the classifier, which he considers an agreement marker for a noun argument of the verb root. Some classifiers are morphologically complex. They can be combined with orientation affixes as well as affixes indicating how the referent is affected (e.g., 'wrecked' or 'broken'). The verb root can, furthermore, be combined with various manner and placement affixes. In Supalla's analysis (and in others to follow), sign parameters that in other signs are considered mere phoneme values are morphemic as well as phonemic. Unfortunately, no complex signs with a complete morphological analysis are provided in Supalla's work, nor are considerations given as to why particular parts of signs have one morphological status rather than another (or are not morphemic at all).
Supalla's analysis has been criticized as being too complex, since he considers every aspect of the signs under discussion that might contribute meaning to the whole as morphemic. As a result, the suggested morphological structure is huge, given that classifier verbs encode multiple aspects of motion and location events, especially in comparison to spoken languages (even spoken languages that are renowned for their morphological complexity). Liddell (2003, 204–206) attempts to give a morphological analysis of a two-handed classifier construction (glossed as person1-walk-to-person2) based on the morphemes suggested by Supalla and counts four roots and minimally 14 and maximally 24 affixes in this sign. This shows that Supalla's morphological analysis of these verbs is indeed extremely complex, but also that it is not detailed enough, since the morpheme status of ten aspects of this particular sign is not clear. One can, therefore, wonder whether too much morphology was assumed and whether some aspects of these structures can be accounted for without necessarily assigning them morphological value. Nevertheless, at least parts of Supalla's analysis remain valid for many researchers: it is generally assumed that at least the movements/locations and the manual articulator are meaningful. The analyses of the morphological structure of such verbs differ, however.
Liddell (2003), for example, presents the view that although the articulator and movement may be morphemes in such verbs, the process by which the verbs are formed is not very productive, and in many verbs that, at first sight, contain meaningful manual articulators and meaningful movements, these sign parts behave idiosyncratically and are not productively combined with other sign parts to form new structures. McDonald (1982) and Engberg-Pedersen (1993) observe that the interpretation of classifier verbs seems to be in part dependent on the classifier that is used. Engberg-Pedersen (1993) furthermore points out that particular movements do not combine well with particular classifiers and suggests that the classifier, rather than the movement, is the core element in these structures (although no further claims are made with respect to the morphological status or structure of the verbs). Slobin et al. (2003) suggest that classifier verbs may be similar to bipartite verb stems in spoken languages (e.g., Klamath; DeLancey 1999), in which the contributions of the classifier and movement (and other) components are of equal importance in the complex verb. Many studies, however, merely indicate that the classifier and the movement are morphemes, although it is generally assumed that other aspects of the classifier verb that convey information about the event (such as manner of locomotion and locations) are (or at least can be) expressed by morphemes. More detailed discussion of the structure of the sign is usually not given.
Still, all studies agree that these constructions are verbs, referring to an event or state in the real world. It is recognized in most investigations that there is an anaphoric relation between the classifier and the referent that is involved in the event. As stated in the previous section, the referent is usually introduced before the classifier verb is signed, although in some cases the referent is clear from the (previous or physical) context and need not be mentioned. After the introduction of the referent, it can be left unexpressed in the further discourse (e.g. in narratives), since the classifier on the verb suffices to track the referent involved. The relation is deemed systematic. Supalla (1982) and some subsequent researchers (e.g., Benedicto/Brentari 2004; Chang/Su/Tai 2005; Cuxac 2003; Glück/Pfau 1998, 1999; Zwitserlood 2003, 2008) consider the classifier an agreement marker or a proform for the referent on the verb. In these accounts, the movement (or localization) in the sign is considered a verb root or stem, and the classifier as well as the locus in space are considered functional elements (i.e. inflectional affixes). These views will be discussed in more detail in the next section.
3.2. Verb roots, (in)transitivity, and the classifier category

As was stated in section 2, researchers generally distinguish two main categories of classifiers: Whole Entity classifiers and Handling classifiers. The former are seen in verbs that express a motion of a referent, its localization in space, or its existence in space. In these verbs, the classifiers represent the referent directly. Handling classifiers, in contrast, occur with verbs that show the manipulated motion or the holding of a referent. The contrast between the two has already been shown in (2) and (3), and is further illustrated in (6), from DGS. The signer uses two verbs with Whole Entity classifiers ( , in signs 13 and 15) and two verbs with Handling classifiers ( , in signs 8 and 14), each classifier representing the old woman. When he uses the verbs with Whole Entity classifiers, he describes an independent motion of the woman, who wants to move up onto the bus; the Handling classifiers are used for a manipulated motion of the old woman by a human agent (the man).
There is a close connection between the category of classifier and the transitivity of the verb: Whole Entity classifiers occur with intransitive verbs, whereas Handling classifiers are used with transitive verbs (in chapter 19, the use of classifier types is discussed in connection with signer's perspective; see also Perniss 2007). Following Supalla (1982), Glück and Pfau (1998, 1999), Zwitserlood (2003), and Benedicto and Brentari (2004) consider the classifier in these verbs a functional element: an agreement marker, which functions in addition to agreement by use of loci in sign space (see chapters 7 and 10 for details on agreement marking by loci in sign space). Benedicto and Brentari (2004) furthermore claim that the classifier that is attached to the verb is also responsible for its (in)transitivity: a Handling classifier turns a (basically intransitive) verb into a transitive verb.
The analysis of classifiers as agreement markers is not uncontroversial. Counterarguments include the observations that classifiers are not obligatory (as they should be if they were agreement markers) and that there is variability in the choice of a classifier (as discussed in section 2.2), which should not be possible if classifiers were agreement markers. These arguments, however, are not valid. First, the marking of agreement is not obligatory in many of the world's languages that allow agreement marking (Corbett 2006).
Second, and connected to the first point, the fact that classifiers do not occur with verbs other than verbs of motion and location may have phonological/articulatory reasons: it is not possible to add a morpheme expressed by a particular configuration of the manual articulator to a verb that already has phonological features for that articulator. This is only possible with verbs that have no phonological specification for the manual articulator, i.e. motion and location verbs (in the same vein, it is argued that many plain verbs cannot show agreement by loci in sign space because they are body-anchored, i.e. phonologically specified for a location; see also chapter 7 on agreement). Finally, variability in the choice of a classifier is, in part, the result of the verb's valence: a different classifier will be combined with an intransitive verb than with a transitive one, Whole Entity classifiers appearing on intransitive verbs and Handling classifiers on transitive ones. Moreover, some variability in the choice of agreement markers is observed in other (spoken) languages as well. This issue, however, is still under debate.
3.3. The phonological representation of the morphemes in classifier verbs

Classifiers in sign languages are often described as bound morphemes, i.e. affixes (see, among others, Supalla 1982; Meir 2001; Tang 2003; Zwitserlood 2003). They are generally considered to be expressed by a particular shape of the manual articulator, possibly combined with orientation features. Classifiers thus lack phonological features for place of articulation and/or movement. It may be partly for this reason that they are bound. Researchers differ with respect to their phonological analysis of the verbs with which classifiers occur. In some accounts (e.g., Meir 2001; Zwitserlood 2003, 2008), classifier verbs contain a root that only has phonological specifications for movement (or location) features, not for the manual articulator. Classifier verb roots and classifiers, then, complement each other in phonological specification, and for this reason the simultaneous combination of a root and a classifier is always possible. In other accounts (e.g., Glück/Pfau 1998, 1999), verbs are assumed to be phonologically specified for movement and handshape features. The affixation of a classifier triggers a phonological readjustment rule for handshape features, which results in a modification of the verbal stem. Some attention has been given to the apparent violations of well-formedness constraints that classifier verbs can give rise to (e.g., Aronoff et al. 2003, 70f). It has also been observed that classifier verbs are mostly monosyllabic. However, apart from Benedicto and Brentari (2004), there have been no accounts of the phonological feature specifications of classifiers and classifier verbs; in general, classifiers are simply referred to as 'handshapes'. Recent phonological models (e.g., Brentari 1998; van der Kooij 2002) as well as new work on phonology may be extended to include classifier verbs.
To sum up, there are a few studies with reasoned suggestions for a (partial) morphological structure of classifier verbs. In general, these signs are considered verb roots or verb stems that are combined with other material; classifiers are argued to be separate morphemes, although the status of these morphemes is still a debated issue: they are left unspecified, or claimed to be roots or affixes (e.g., agreement markers).
Handling classifiers occur in transitive classifier verbs, where the classifier represents a referent that is being held/manipulated (as well as a referent that holds/manipulates the other referent); Whole Entity classifiers, in contrast, occur in intransitive verbs and represent referents that move independently of manipulation or simply exist at particular locations in sign space. The phonological representation of classifier verbs in sign languages has received little attention to date.
4. Classifiers in signs other than classifier verbs

Meaningful manual articulators are found not only in classifier verbs; they are also encountered in other signs. Some examples from NGT are shown in (7), in which we recognize the hand configuration representing long and thin entities, i.e. knitting needles, legs, rockets, and thermometers (@), and a hand configuration often used in NGT for the manipulation of long and/or thin entities (with control), such as keys, fishing rods, toothbrushes, and curtains ( ):
There are different views of the structure of such signs: some researchers consider them monomorphemic, while others claim that they are morphologically complex. These views are discussed in the next section.
4.1. Complex or monomorphemic signs?

Traditionally, signs in which the manual articulator (and other parameters) are meaningful, but which are not classifier verbs, are called 'frozen' signs. This term can be interpreted widely, for example as 'signs that are monomorphemic', 'signs that one may find in a dictionary', or 'signs that may be morphologically complex but are idiosyncratic in meaning and structure'.
Most researchers adhere to the view that these signs originate from classifier verbs that were formed according to productive sign formation processes and that have undergone a process of lexicalization (e.g., Supalla 1980; Engberg-Pedersen 1993; Aronoff et al. 2003): the interpretation of the sign has become more general than that of the classifier verb, and the hand configuration, location, and movement parts no longer have distinct meanings and therefore can no longer be interchanged with other parts without radically changing the meaning of the whole sign (in contrast to classifier verbs). Often, and again in contrast to classifier verbs, the signs no longer express (motion or location) events (e.g., Supalla 1980; Newport 1982); they obey particular phonological restrictions that can be violated by classifier verbs, and they can undergo various morphological processes that are not applicable to classifier verbs, such as the affixation of aspectual markers (Sandler/Lillo-Martin 2006; Wilbur 2008) and noun derivation affixes (Brentari/Padden 2001).
There are also studies claiming that many such signs are not (fully) 'frozen' but, on the contrary, morphologically complex. Some studies imply that sign language users are aware of the meaningfulness of parts of such signs, such as the handshape (Brentari/Goldsmith 1993; Cuxac 2003; Grote/Linz 2004; Tang/Sze/Lam 2007; Sandler/Lillo-Martin 2006). Some researchers suggest that such signs are actually the result of productive processes of sign formation (e.g., Kegl/Schley 1986; Brennan 1990a,b; Johnston/Schembri 1999; Zeshan 2003; Zwitserlood 2003, 2008). Signers of various sign languages are reported to coin new signs on the spot when they need them, for instance when the language does not have a conventional sign for the concept they want to express or when they cannot remember the sign for a particular concept, and these signs are usually readily understood by their discourse partners. Some of these newly coined signs are accepted by the language community and become conventionalized. This does not necessarily mean that they started out as productively formed classifier constructions that were lexicalized in the conventionalization process (lexicalization in this context meaning: undergoing (severe) phonological, morphological, and semantic bleaching). Even though lexicalization as well as grammaticalization processes take place in all languages, and sign languages are no exception, sign languages are relatively young (see chapter 34 on lexicalization and grammaticalization). In addition to the fact that there may be other sign formation processes besides classifier verb formation involved, it is not very plausible that diachronic lexicalization processes have taken place on such a large scale as to result in the large numbers of signs in which meaningful hand configurations (as well as other meaningful components) occur in many sign languages, especially in the younger ones. Besides this, it has not been possible to systematically verify the claim of diachronic lexicalization of signs for most sign languages because of a lack of well-documented historical sources. Some phonological studies have recognized that the 'frozen' lexicon of sign languages contains many signs that may be morphologically complex.
These studies recognize relations between form and meaning of signs and sign parts, but lack morphological accounts to which their phonological descriptions may be connected (Boyes Braem 1981; Taub 2001; van der Kooij 2002; see also chapter 18 for discussion of iconicity).
4.2. The structure of 'frozen' signs

A few studies discuss the structure of 'frozen' signs; these are briefly sketched below (see chapter 5 for a variety of other morphological processes in sign languages). Brennan's (1990a,b) work on sign formation in BSL is comprehensive and focuses on the denotation of productively formed signs, i.e. the characteristic(s) of an entity or event that are denoted in such signs and the way in which this is done, with particular attention to the relation between the form and movement of the manual articulator on the one hand and aspects of entities and events on the other. Although Brennan indicates that sign parts such as (changes of) hand configurations, movements, and locations are morphemes, she does not provide morphological analyses of the signs in which they appear. She roughly states that they are kinds of compounds, and distinguishes two types: simultaneous compounds and 'mix 'n' match' signs. Brennan argues that simultaneous compounds are blends of two individual signs (many of which contain classifiers), each of which necessarily drops one or more of its phonological features in the compounding process, in order for the compound to be pronounceable. Mix 'n' match signs are combinations of classifiers, symbolic locations, and meaningful non-manual components. According to Brennan, the meaning of both types of sign is not always fully decomposable.
Meir (2001) argues that Israeli Sign Language (Israeli SL) has a group of noun roots (also called 'Instrumental classifiers') – free morphemes that are fully specified for phonological features and that can undergo a lexical process of Noun Incorporation into verbs. This process is subject to the restriction that the phonological features of the noun root and the verb do not conflict. The output of this process is a compound. Examples of such compounds are the signs glossed as spoon-feed, fork-eat, needle-sew, and scissors-cut. According to Meir, the differences between the processes and outputs of Noun Incorporation and classifier verb formation are the following: (i) the former are combinations of free morphemes (verb and noun roots), whereas the latter are combinations of verbs and affixes; (ii) combinations of classifier verbs and classifiers are always possible because their phonological features never conflict, whereas Noun Incorporation is blocked if the phonological features of the verb and noun root conflict; (iii) in the compounding process, the incorporated noun root constitutes a syntactic argument, which cannot be expressed with a separate noun phrase in the sentence after incorporation, whereas after classifier verb formation, both the classifier representing a referent and the noun referring to that referent can be present in the sentence.
An analysis that is reminiscent of Brennan's (1990a,b) and Meir's (2001) work is provided by Zwitserlood (2003, 2008) for NGT. There it is argued that all manual sign parameters (handshape, orientation, movement, and location) can be morphemic (as in Brennan 1990a,b). All these morphemes are considered roots that are phonologically underspecified (in contrast to Meir's (2001) view) and that can combine into complex signs called 'root compounds'. Zwitserlood argues that the roots in these compounds do not have a grammatical category. The signs resulting from combinations of these roots are morphologically headless and have no grammatical category at first instance. The grammatical category is added in syntax, after the sign has been formed.
In this view, the differences between root compounds and classifier verbs, and the processes by which they are formed, are the following: (i) the former is a lexical (compounding) process, the latter a grammatical (inflectional) process;
(ii) classifier verbs consist of only one root, which is phonologically specified for a movement. This root is assigned the grammatical category of verb in syntax, after which various affixes, such as the classifier (which is considered an agreement marker), are added. Root compounds, in contrast, contain more than one root, one of which may be a classifier, and they can be assigned different grammatical categories; (iii) the classifier in a classifier verb is always related to a syntactic argument of the verb, i.e. the Theme (moving) argument; the classifier in a root compound is not systematically related to a syntactic argument (in case the root compound is a verb); and (iv) whereas intransitive classifier verbs combine with Whole Entity classifiers and transitive ones with Handling classifiers, a classifier in a verbal root compound is not connected with the verb's valence. Zwitserlood's account shows similarities to Brennan's work and shares some ideas with Meir's analysis. It is also somewhat reminiscent of the idea of bipartite (or rather, multipartite) stems suggested by Slobin et al. (2003), with the difference that the root compounding process is not restricted to verbs.
To summarize, although in most sign languages classifiers are recognized in many signs that are not classifier verbs, the morphological structure of these signs has rarely been investigated to date. This is largely because these signs are reminiscent of classifier verbs while not showing the patterns and characteristics observed in constructions with classifier verbs. As a result, the signs in question are generally taken to be lexicalized forms without internal morphology. The literature contains a few studies that recognize the fact that classifiers as well as other sign parameters are used systematically and productively in new sign formation in many sign languages and that some of the signs thus formed enter the established lexicon (see also Johnston/Schembri 1999). Signers also appear to be sensitive to the meaningful elements within the signs. The general assumption that these signs are monomorphemic may be partly due to the gloss tradition in sign language research, where signs are labeled with a word or word combination from the local spoken language and/or English that often does not match the internal structure of the signs. Unintentionally, researchers may be influenced by the gloss and overlook sign-internal structure (see Hoiting/Slobin 2002; Zwitserlood 2003). There are several accounts of sign-internal morphology (e.g., Padden/Perlmutter 1987; Fernald/Napoli 2000; Frishberg/Gough 2000; Wilbur 2008; as well as others mentioned in this section) along the lines of which more morphological studies of signs and new sign coinage can be undertaken. Also, psycholinguistic studies of sign processing are important in showing awareness of morphological structure in users of sign languages.
5. The acquisition of classifiers in sign languages

Chapter 28 of this volume gives a general overview of sign language acquisition. This section focuses specifically on research into the acquisition of classifier structures by Deaf children. Many of these studies concentrate on the production of classifiers by Deaf children, i.e. on the age at which and the order in which they acquire the different classifiers of their target language. Mostly, elicitation tasks are used (e.g., Supalla 1982; Kantor 1980; Schick 1990b; Fish et al. 2003). In a few studies, the movements within the classifier verbs are also taken into account (e.g., Newport 1988; Tang/Sze/Lam 2007).
The children in these studies are generally aged three years and older, and the tasks are often designed to elicit Whole Entity classifiers (including SASSes), although studies by Schick (1990b) and Slobin et al. (2003) also look at Handling classifiers. All studies are cross-sectional.
5.1. Production studies

The general results of the production studies are that the youngest children initially use different strategies to express the events presented in the stimuli. They use lexical verbs of motion as well as classifier verbs, and sometimes they do not use a verb at all. Older children use more classifier verbs than younger children. Although the classifiers used by these children are often quite iconic, children initially do not seem to make use of the possibility of iconic mapping that most sign languages offer between motion events and spatial situations in real life on the one hand, and the use of space and iconic classifier forms on the other (but see Slobin et al. (2003) for arguments for iconic mapping in spontaneous (possibly gestural) utterances by children between one and four years of age). As for the movements within the verbs, children seem to represent complex path movements sequentially rather than simultaneously, unlike adults (Supalla 1982; Newport 1988). Young children often use a general classifier instead of a more specific one, or a classifier that is easier to articulate than the target classifier (e.g., < instead of the -form representing vehicles in ASL). Nevertheless, target classifiers that are considered motorically simple are not always acquired earlier than those that are more complex (note that it is not always clear which handshapes are simple and which are complex). In many cases where the spatial scene to be described contains a Figure and a Ground object, children do not represent the Ground referent simultaneously with the Figure referent, and in some cases in which the Ground referent is present, it is not appropriate (e.g., the scale between the Ground and the Figure referents is not felicitous). The correct use of classifiers is not mastered before eight to nine years of age.
The conclusions of the studies are not unequivocal. In some studies (even studies of the acquisition of the same target language) the children appear to have acquired a particular classifier earlier than in others, or a particular classifier category has been acquired earlier than stated in another study (e.g., Tang/Sze/Lam 2003). Many researchers indicate that young children rarely use complex classifier constructions, i.e. constructions in which each hand represents a different entity. Studies that discuss the errors made by the children provide an interesting window on their development, for example apparent overgeneralization of morphological structure in lexical signs (e.g., Bernardino 2006; Tang/Sze/Lam 2007).
5.2. Comprehension studies

Few comprehension studies of the acquisition of classifier constructions in sign languages have been undertaken to date. The existing studies focus on comprehension of the motions and (relative) locations of referents in (intransitive) classifier verbs, rather than on the classifier handshapes.
For BSL, Morgan et al. (2008) conclude that verbs containing path movements are understood better and earlier than those containing localizations, and that neither movements nor localizations are fully mastered at five years of age. Martin and Sera (2006) report that comprehension of locative relations between referents (both static and dynamic) is still not fully acquired by children learning ASL at nine years of age.
5.3. Interpretation of the results

Because of the different approaches, the studies cannot easily be compared, and interpretation of the results of the available acquisition studies is rather difficult. More importantly, the results are somewhat obscured by the different assumptions about the structures under investigation that underlie the designs and scoring. For example, although the term 'SASS' is used in several studies, what the term covers is not described in detail; its interpretation may therefore differ across these studies. Also, from descriptions of test items it appears that these may involve classifier verbs as well as verbs that do not express a motion or location of a referent (such as signs for looking and cutting). One of the most important issues in this respect is the fact that in most studies vital information is missing about the targets of the test items. Thus, it is often unclear how these were set and how the children's data were scored with respect to them. Since adult language is the target for children acquiring a language, the language use and comprehension of adults should be the benchmark in acquisition tests. A few studies (e.g., Fish et al. 2003) show that the children's classifier choices for referents vary, some of the variation indicating a particular focus on the referent. However, it is not clear how this relates to adult variation on the same test items. For instance, Martin and Sera (2006) compared the comprehension of spatial relations by children acquiring ASL and children acquiring English, in a study in which the children's scores were also compared to adult scores on the same test items (in ASL and English). As expected, the English-speaking adults scored 99% correct. The ASL-using adults, however, had a mean score of only 78% correct. Apparently, in this case the test targets were not the adult patterns, and it is therefore unclear what patterns were selected as targets. This also holds for most other classifier acquisition studies.
5.4. Summary

Research shows that the acquisition of classifier constructions in sign languages is a very complex task, in which the child makes little use of the iconic mapping between event and linguistic representation. The correct use of classifier verbs is not fully acquired until children are in their early teens. Further research with a broader scope, taking context, different strategies, and variation in the choice of classifier into account, and clearly relating the results to adult comprehension and performance, is necessary to shed more light on the acquisition of these constructions.
6. Classifiers in spoken and sign languages: a comparison

6.1. Overview of recent research on spoken language classifiers

Research into classifiers in spoken languages was already well underway in the 1970s. It became clear that there are different classifier systems in the world's languages. As stated in section 2, the early study of sign language classifiers was much influenced by the then available literature on spoken language classifiers. In an overview article, Allan (1977) distinguished four types of classifier languages, one of which is the 'predicate classifier language' (e.g., Navajo). Classifiers in sign languages seemed to match this type, and similar structures in Navajo and ASL were used to exemplify this. However, the comparison does not hold on two points. First, Navajo is a language with classificatory verbs rather than classifier verbs, the difference being that in classifier verbs a separate verb stem and classifier can be distinguished, while in classificatory verbs the verb stem itself is responsible for the classification of the referent involved in the event and no separate classifying morpheme can be discerned (Young/Morgan 1987; Aikhenvald 2000; Grinevald 2000). Second, and related to the previous point, the early comparisons between structures in Navajo and ASL were based on a misinterpretation of the Navajo classificatory verbs (Engberg-Pedersen 1993; Schembri 2001; Zwitserlood 1996, 2003).
Recent studies, particularly work by Aikhenvald (2000) and Grinevald (2000), give much more, and newer, information about classifiers in a variety of spoken languages, covering their semantics, pragmatics, function, and morphological realization. If we take as a premise that a classifier is a distinct morpheme, four major categories of classifiers can be distinguished (which are not quite the same as those suggested by Allan (1977)). These have the following characteristics:

1) Noun classifiers are free morphemes that occur within a noun phrase (more than one classifier may occur within the noun phrase). The semantics of noun classifiers is often based on animacy and physical properties of the referent. The choice of a noun classifier is based on semantics and can vary when a speaker focuses on different characteristics of the noun referent. Not all nouns in a language take a classifier. The sets of noun classifiers in different languages can vary from small (even two, e.g. in Emmi, Australia) to (very) large (several hundred in Asian languages). These classifiers function as determiners but can also be used pronominally (in which case the NP does not contain a noun).

2) Numeral classifiers are free or bound morphemes that are obligatory in numeral and quantified noun phrases. They also occur occasionally with adjectives and demonstratives. The semantics of these classifiers includes animacy, social status, directionality, and physical and functional properties. The choice of a numeral classifier is predominantly semantic, and some nouns allow alternative classifiers, depending on the property of the noun that is in focus. Every noun with a countable referent has a classifier, although there may be some abstract nouns that are not classified. The number of classifiers may range from few (e.g., 14 in Tashkent Uzbek) to large numbers (e.g., an estimated 200 in Thai and Burmese). Their main function is to individuate nouns (typically 'concept' or mass nouns in the languages with this classifier system) in a quantificational environment, but they can also have an anaphoric function.
3) Genitive (or: possessive or relational) classifiers are bound morphemes that occur in noun phrases with possessive constructions. They generally refer to the semantic class of the possessed nouns. Not all nouns are categorized by a classifier; nouns that are classified often belong to a particular semantic group. The semantics concerns physical and functional properties, nature, and sometimes animacy. Some languages with a system of genitive classifiers have a 'generic' or 'default' classifier that can be used instead of more specific ones. This type of classifier can consist of independent words or affixes. The choice of a classifier is strictly semantic, and the size of the classifier inventories is variable. The function of this type of classifier is the expression of possession.

4) Verbal classifiers are bound morphemes that are affixed to verbs and are linked to verb arguments (usually subjects or objects, but sometimes even peripheral arguments) in terms of their inherent properties. The semantics of these classifiers has a wide range, usually based on physical and functional properties, nature, directionality/orientation, quanta, and sometimes animacy. The number of classifiers ranges from several dozen (e.g., in Terena, a language spoken in Brazil) to over one hundred (e.g., in Mundurukú, a Tupi language of north central Brazil). Usually only a subset of the verbs in a language takes a classifier. Not all nouns are classified, but a noun can have more than one classifier. The main function of this type of classifier is referent tracking.

A note of caution is needed here: the characteristics of the classifier systems outlined above are generalizations, based on descriptions of (large) sets of data from languages that employ one or more of these classifier systems. There is, however, much variation within the systems. Also, some classifier systems have been well studied, whereas others, particularly verbal classifier systems, are still under-researched in comparison to other systems (such as numeral classifiers), which complicates a comparison between classifier systems in spoken and sign languages considerably.
6.2. A comparison between (verbal) classifiers in spoken and sign languages

As stated in section 3, classifiers in sign languages typically occur on verbs. A comparison between sign and spoken languages should thus focus primarily on verbal classifiers. Classifiers in sign languages share a number of characteristics with verbal classifiers in spoken languages; in some characteristics, however, they differ. We will now focus on the main characteristics of classifiers in both modalities and discuss their similarities and differences.
First, verbal classifiers are affixes attached to a verb stem (Aikhenvald 2000, 428; Grinevald 2000, 67). For example, in the Australian language Gunwinggu, the classifier bo: (for liquid referents) is bound to the verb stem mangan ('fall') (Oates 1964, in Mithun 1986, 389):
(8)  gugu   ga-bo:-mangan            [Gunwinggu]
     water  it-cl:liquid-fall
     'Water is falling.'
Classifiers in sign languages are also considered affixes by many researchers (e.g., Supalla 1982, 24; Sandler/Lillo-Martin 2006, 77), while others do not specify their morphological status.
Second, verbal classifiers in spoken languages are linked to the subject or object argument of the verb to which they are affixed, and they are used to maintain reference to the referent throughout a discourse (Aikhenvald 2000, 149). The verb determines which argument the classifier represents: classifiers represent the subject of intransitive verbs and the object of transitive verbs. This is illustrated with the classifier n- for round entities in the Northern Athabaskan language Koyukon, which here represents a rope. The rope is the subject of the intransitive verb in (9a) and the object of the transitive verb in (9b) (Thompson 1993, in Aikhenvald 2000, 168):
(9)  a. tl'ool  n-aal'onh                                  [Koyukon]
        rope    cl:round.thing-be.there
        'A rope is there.'
     b. tl'ool  n-aan-s-'onh
        rope    cl:round.thing-pref-1sg-arrive.carrying
        'I arrived carrying a rope.'
As we have seen in examples (5) and (6) in section 3, a signer can use a classifier after its referent has been introduced (or when it is clear from the context) to relate the referent's motions through space, a change in its posture, or its existence and/or location in sign space. The classifier suffices to maintain the reference through long stretches of discourse, and thus no overt nouns are necessary (though they may still occur, e.g. to re-establish reference). Thus, similarly to verbal classifiers in spoken languages, classifiers in sign languages function as referent tracking devices. Some researchers claim that classifiers represent verb arguments and function as agreement markers of the arguments on the verbs. A difference between the two modalities is that there are generally no separate classifiers for transitive and intransitive verbs in spoken languages, whereas such a difference is found in sign languages: Whole Entity classifiers appear on intransitive verbs, Handling classifiers on transitive verbs.
Third, although verbal classifiers in spoken languages have an anaphoric function, their use is not obligatory. They typically occur on a subset of a language's verbs and are sometimes used for special effects (e.g., stressing that a referent is completely involved in the event in Palikur, an Arawak language spoken at the mouth of the Amazon River, as stated by Aikhenvald (2000, 165)). This characteristic is rather difficult to compare with classifiers in sign languages. Apparently, classifiers in sign languages only occur on a subset of verbs, but this may be a result of the articulatory possibilities of the manual-visual modality, as described above in sections 3.3 and 4.2. Classifiers in sign languages can only co-occur with verbs that do not have phonological specifications for the manual articulator (usually verbs of motion and location), not with verbs that have inherent phonological specifications for the hand. It is interesting, though, that the verbs that take classifiers in spoken languages are also often motion verbs, positional verbs, and verbs expressing the handling of an object, as well as verbs that describe physical properties of the referent.
178
II. Morphology that take classifiers in spoken languages are also often motion verbs, positional verbs, verbs expressing the handling of an object, as well as verbs that describe physical properties of the referent. Whether or not sign language classifiers are obligatory on the subset of motion/location verbs is still a matter of debate. For example the fingertip that is sometimes used for localization of referents in space or for tracing the motion of a referent through space is regarded by some as a kind of ‘default’ classifier, used when a signer does not focus on any particular characteristic of the referent (see also section 2.2). In this view, it can be argued that verbs of motion that appear with this shape of the articulator have a classifier indeed, and that classifiers, thus, are obligatorily attached to these verbs. In other views, the finger(tip) is considered a (default) phonetic articulation, spelled out simply because the expression of the location or movement needs an articulator, or the finger(tip) handshape is considered as one of the phonological features of the verb, that undergoes a change when a classifier morpheme is added (e.g., Glück/Pfau 1998, 1999). More research is necessary for any of these views to prove correct. Fourth, verbal classifier systems (as well as other classifier systems) in spoken languages allow variability in the choice of a classifier. Thus a noun can be categorized with more than one classifier (this is sometimes called ‘reclassification’). The variability range is to some extent dependent on the size of the inventory of classifiers, and on the semantic range of the categorization. An example of this variability from Miraña (also called Bora; a Witotoan language spoken in Brazil, Peru, and Colombia) is shown below. In this instance, a more general classifier appears on the verb in (10a) and a classifier that focuses on the shape in (10b) (Seifart 2005, 80): (10)
(10) a. kátX:βí-ni i:-ni pihhX́-ko [Miraña]
        fall-cl:inanimate dist-cl:inanimate fish.nmz-cl:pointed
        'It (inanimate) fell, that (pointed) fishing rod.'
     b. kátX:βí-ko i:-ko pihhX́-ko
        fall-cl:pointed dist-cl:pointed fish.nmz-cl:pointed
        'It (pointed) fell, that (pointed) fishing rod.'
As discussed in section 2, classifier variation is also possible in sign languages, both for Whole Entity and Handling classifiers. This variability has been one of the reasons for proposing other terms for these elements. Slobin et al. (2003) state that the term 'classifier' is in fact a misnomer, because choosing a particular form of the manual articulator is an act of indicating some property of the referent rather than of classifying the referent. This holds true not only for classifiers in sign languages but also for those in spoken languages. Traditionally, the main function of these elements was considered to be categorization. However, recent work by, among others, Croft (1994), Aikhenvald (2000), and Grinevald (2000) shows that categorization is not the main function; rather, it serves the various primary functions of each classifier category (e.g., individuation for numeral classifiers, reference tracking for verbal classifiers). In this respect, then, classifiers in sign and spoken languages are rather similar, despite the by now infelicitous term.

Example (10) also shows that the classifiers in Miraña occur not only on the verb but also on nouns and determiners. This is a frequent observation in spoken languages: languages with verbal classifiers often have multiple classifier systems. This contrasts with sign languages, which only have verbal classifiers.
A further characteristic of spoken languages with verbal classifier systems is that not all nouns are classified. Even though it is becoming clear that classification does not so much concern nouns but rather entities, it can still be stated that not all entities are represented by a classifier in spoken languages. As for sign languages, it has not been mentioned in the literature that there are entities that are not classified by a particular hand configuration. This does not imply that all entities in sign languages can be represented by a classifier. Studies so far have used narrative data, often elicited by pictures, stories, and movies, which feature (restricted sets of) concrete entities. It is possible that other, particularly more abstract, entities do not take classifiers, since they are less likely to move through space or to enter spatial relations. On the other hand, it is just as plausible that abstract entities can be assigned particular characteristics, such as shape or animacy, and enter metaphoric spatial relations. For the moment, the issue remains unresolved.

Finally, we have seen that sign language classifiers occur not only with motion and location verbs but are also used in lexicogenesis (section 4), even though this issue still needs extensive research. It has been claimed (e.g., Engberg-Pedersen 1993; Schembri 2003) that this is not the case in spoken languages and that this is a point where sign and spoken language classifiers differ. However, classifiers in spoken languages can be used in word formation, too. This has received little attention in the overview literature on classifiers, but it is discussed in studies of particular spoken languages with classifier systems (e.g., Senft 2000; Seifart 2005; van der Voort 2004). The following examples from Miraña (Seifart 2005, 114) show that a noun root (X́hI 'banana') can be combined with one or more classifiers. Seifart states that such combinations are compounds.
(11) a. X́hí [Miraña]
        banana (general: fruit, plant, bunch, …)
     b. X́hI-kó
        banana-cl:pointed
        'banana plant'
     c. X́hI-kó-ʔámì
        banana-cl:pointed-cl:leaf
        'leaf of a banana plant'
     d. X́hI-ʔó
        banana-cl:oblong
        'banana (fruit)'
     e. X́hI-ʔó-βí:X́
        banana-cl:oblong-cl:chunk
        'chunk of a banana'
Seifart (2005, 121) indicates that the meaning of the resulting compounds is not always compositional and may even differ substantially from the combined meanings of the component parts. This has also been reported for signs that contain classifiers (e.g., Brennan 1990a,b; Johnston/Schembri 1999) and may be one of the grounds for the assumption that such signs are 'frozen'. In this respect, too, verbal classifiers in sign and spoken languages appear to be similar.
To summarize, recent findings in the spoken language literature on classifiers reveal a number of similarities between verbal classifiers in spoken and sign languages, contrary to what has previously been claimed in the literature (e.g., Engberg-Pedersen 1993; Slobin et al. 2003; Schembri 2003). These similarities concern the main functions of classifiers: the lexical function of word/sign formation and the grammatical function of reference tracking. Also, in both spoken and sign languages it is possible to choose a particular classifier in order to focus on a particular characteristic of an entity, although the entity may have a preferred classifier. A difference lies in the observation that sign languages have only verbal classifiers, whereas spoken languages show at least four different classifier systems and may combine two or more of them (especially languages with a system of verbal classifiers). Some characteristics of verbal classifiers remain difficult to compare across the two modalities, e.g., whether there are referents that are not classified in sign languages, and whether the use of a classifier is optional, as it is in spoken language verbal classifier systems.
7. Conclusion

Various aspects of classifiers in sign languages have been discussed in this chapter and compared with classifiers in spoken languages. Although classifiers have been the focus of much attention in sign language research (much more than verbal classifiers in spoken languages), many unresolved issues remain. Because of this focus, the phenomenon of classifiers may also have been assigned a larger role in sign languages than it deserves. There seem to be particular expectations with respect to classifier verbs: since the process of classifier verb formation is considered productive, many more forms and a greater use of these signs are expected than may actually occur (whereas another productive process of sign formation involving classifiers, described in section 4, is rather neglected). Like speakers, signers have several means to express spatial relations between entities and the movements of entities through space; classifier verbs are only a subset of these. Users of sign languages have a range of devices at their disposal for the expression of existence, location, motion, and locomotion, as well as the shape and orientation of entities. These devices can be combined, but signers may also use only one of them, focusing on or defocusing a particular aspect of an event.

Finally, most work on classifiers in sign languages is based on narrative data, much of which has been elicited by pictures, comics, and movies. The use of particular stimuli ensures the presence of classifiers in the data and is convenient for cross-linguistic comparison, but it also biases the resulting generalizations and, consequently, the studies that build on them, such as acquisition studies and comparisons with similar phenomena in spoken languages. Although many generalizations and claims have been made about classifiers and classifier constructions in sign languages, and theories have been formed on the basis of these generalizations (and vice versa), there is still much controversy in this field. The observations need to be verified against data of different genres, especially natural discourse, obtained from large groups of users of (various) sign languages. Also, recent developments in other linguistic domains need to be taken into account. The results of such studies will give us a clearer view of the phenomenon and provide a solid basis for further research.

Acknowledgements: I am indebted to my colleagues Aslı Özyürek, Pamela Perniss, and Connie de Vos for providing me with data from their TİD, DGS, and Kata Kolok corpora, and to them as well as to Adam Schembri and two reviewers for comments on earlier versions of this chapter. I am also grateful to my colleagues Yassine Nauta and Johan Ros for signing the NGT examples in (7). The construction of the DGS/TİD corpora was made possible through a VIDI grant from the Dutch Science Foundation NWO. The construction of the Corpus NGT was funded by an investment grant from the same foundation.
8. Literature

Adams, Karen 1986 Numeral Classifiers in Austroasiatic. In: Craig, Colette (ed.), Noun Classes and Categorization. Amsterdam: Benjamins, 241⫺262.
Aikhenvald, Alexandra Y. 2000 Classifiers: A Typology of Noun Categorization Devices. Oxford: Oxford University Press.
Allan, Keith 1977 Classifiers. In: Language 53, 285⫺311.
Aronoff, Mark/Meir, Irit/Padden, Carol/Sandler, Wendy 2003 Classifier Constructions and Morphology in Two Sign Languages. In: Emmorey, Karen (ed.), Perspectives on Classifiers in Sign Languages. Mahwah, NJ: Lawrence Erlbaum, 53⫺84.
Benedicto, Elena/Brentari, Diane 2004 Where Did All the Arguments Go? Argument-changing Properties of Classifiers in ASL. In: Natural Language and Linguistic Theory 22, 743⫺810.
Bernardino, Elidea L. A. 2006 What Do Deaf Children Do When Classifiers Are Not Available? The Acquisition of Classifiers in Verbs of Motion and Verbs of Location in Brazilian Sign Language (LSB). PhD Dissertation, Graduate School of Arts and Sciences, Boston University.
Boyes-Braem, Penny 1981 Features of the Handshape in American Sign Language. PhD Dissertation, University of California, Berkeley.
Brennan, Mary 1990a Productive Morphology in British Sign Language. Focus on the Role of Metaphors. In: Prillwitz, Siegmund/Vollhaber, Tomas (eds.), Current Trends in European Sign Language Research. Proceedings of the 3rd European Congress on Sign Language Research, Hamburg, July 26⫺29, 1989. Hamburg: Signum, 205⫺228.
Brennan, Mary 1990b Word Formation in British Sign Language. Stockholm: University of Stockholm.
Brentari, Diane/Goldsmith, John 1993 Secondary Licensing and the Non-dominant Hand in ASL Phonology. In: Coulter, Geoffrey R. (ed.), Current Issues in ASL Phonology. New York: Academic Press, 19⫺41.
Brentari, Diane 1998 A Prosodic Model of Sign Language Phonology. Cambridge, MA: MIT Press.
Brentari, Diane/Padden, Carol 2001 Native and Foreign Vocabulary in American Sign Language. In: Brentari, Diane (ed.), Foreign Vocabulary in Sign Languages. Mahwah, NJ: Lawrence Erlbaum, 87⫺119.
Chang, Jung-hsing/Su, Shiou-fen/Tai, James H-Y. 2005 Classifier Predicates Reanalyzed, with Special Reference to Taiwan Sign Language. In: Language and Linguistics 6(2), 247⫺278.
Corazza, Serena 1990 The Morphology of Classifier Handshapes in Italian Sign Language (LIS). In: Lucas, Ceil (ed.), Sign Language Research: Theoretical Issues. Washington, DC: Gallaudet University Press, 71⫺82.
Croft, William 1994 Semantic Universals in Classifier Systems. In: Word 45, 145⫺171.
Cuxac, Christian 2000 La Langue des Signes Française: les Voies de l'Iconicité. Paris: Ophrys.
Cuxac, Christian 2003 Iconicité des Langues des Signes: Mode d'Emploi. In: Monneret, Philippe (ed.), Cahiers de Linguistique Analogique 1. A.B.E.L.L. Université de Bourgogne, 239⫺263.
Cuxac, Christian/Sallandre, Marie-Anne 2007 Iconicity and Arbitrariness in French Sign Language ⫺ Highly Iconic Structures, Degenerated Iconicity and Diagrammatic Iconicity. In: Pizzuto, Elena/Pietrandrea, Paola/Simone, Raffaele (eds.), Verbal and Signed Languages. Comparing Structures, Constructs and Methodologies. Berlin: Mouton de Gruyter, 13⫺33.
DeLancey, Scott 1999 Lexical Prefixes and the Bipartite Stem Construction in Klamath. In: International Journal of American Linguistics 65, 56⫺83.
DeMatteo, Asa 1977 Visual Imagery and Visual Analogues in American Sign Language. In: Friedman, Lynn A. (ed.), On the Other Hand. New Perspectives on American Sign Language. New York: Academic Press, 109⫺137.
Denny, J. Peter 1979 The 'Extendedness' Variable in Classifier Semantics: Universal Features and Cultural Variation. In: Mathiot, Madeleine (ed.), Ethnolinguistics: Boas, Sapir and Whorf Revisited. The Hague: Mouton Publishers, 97⫺119.
Denny, J. Peter/Creider, Chet A. 1986 The Semantics of Noun Classes in Proto Bantu. In: Craig, Colette (ed.), Noun Classes and Categorization. Amsterdam: Benjamins, 217⫺239.
Emmorey, Karen 2002 Language, Cognition, and the Brain. Insights from Sign Language Research. Mahwah, NJ: Lawrence Erlbaum.
Engberg-Pedersen, Elisabeth 1993 Space in Danish Sign Language: The Semantics and Morphosyntax of the Use of Space in a Visual Language. Hamburg: Signum.
Engberg-Pedersen, Elisabeth 1995 Point of View Expressed through Shifters. In: Emmorey, Karen/Reilly, Judy (eds.), Language, Gesture, and Space. Hillsdale, NJ: Lawrence Erlbaum, 133⫺154.
Fernald, Theodore B./Napoli, Donna Jo 2000 Exploitation of Morphological Possibilities in Signed Languages: Comparison of American Sign Language with English. In: Sign Language & Linguistics 3(1), 3⫺58.
Fischer, Susan D. 2000 Thumbs Up Versus Giving the Finger: Indexical Classifiers in NS and ASL. Paper Presented at the 7th International Conference on Theoretical Issues in Sign Language Research (TISLR), Amsterdam.
Fish, Sarah/Morén, Bruce/Hoffmeister, Robert/Schick, Brenda 2003 The Acquisition of Classifier Phonology in ASL by Deaf Children: Evidence from Descriptions of Objects in Specific Spatial Arrangements. In: Beachley, Barbara et al. (eds.), Proceedings of the Annual Boston University Conference on Language Development 27(1). Somerville, MA: Cascadilla Press, 252⫺263.
Frishberg, Nancy/Gough, Bonnie 2000 [1973] Morphology in American Sign Language. In: Sign Language & Linguistics 3(1), 103⫺131.
Glück, Susanne/Pfau, Roland 1998 On Classifying Classification as a Class of Inflection in German Sign Language. In: Cambier-Langeveld, Tina/Lipták, Aniko/Redford, Michael (eds.), Proceedings of ConSole VI. Leiden: SOLE, 59⫺74.
Glück, Susanne/Pfau, Roland 1999 A Distributed Morphology Account of Verbal Inflection in German Sign Language. In: Cambier-Langeveld, Tina/Lipták, Aniko/Redford, Michael/van der Torre, Eric Jan (eds.), Proceedings of ConSole VII. Leiden: SOLE, 65⫺80.
Grinevald, Colette 2000 A Morphosyntactic Typology of Classifiers. In: Senft, Günter (ed.), Systems of Nominal Classification. Cambridge: Cambridge University Press, 50⫺92.
Grote, Klaudia/Linz, Erika 2004 The Influence of Sign Language Iconicity on Semantic Conceptualization. In: Müller, Wolfgang G./Fischer, Olga (eds.), Iconicity in Language and Literature 3. Amsterdam: Benjamins, 23⫺40.
Hendriks, Bernadette 2008 Jordanian Sign Language: Aspects of Grammar from a Cross-linguistic Perspective. PhD Dissertation, University of Amsterdam. Utrecht: LOT.
Hilzensauer, Marlene/Skant, Andrea 2001 Klassifikation in Gebärdensprachen. In: Leuninger, Helen/Wempe, Karin (eds.), Gebärdensprachlinguistik 2000 ⫺ Theorie und Anwendung. Hamburg: Signum, 91⫺111.
Hoiting, Nini/Slobin, Dan 2002 Transcription as a Tool for Understanding: The Berkeley Transcription System for Sign Language Research (BTS). In: Morgan, Gary/Woll, Bencie (eds.), Directions in Sign Language Acquisition. Amsterdam: Benjamins, 55⫺76.
Johnston, Trevor 1989 Auslan: The Sign Language of the Australian Deaf Community. PhD Dissertation, University of Sydney.
Johnston, Trevor/Schembri, Adam 1999 On Defining Lexeme in a Signed Language. In: Sign Language & Linguistics 2(2), 115⫺185.
Kantor, Rebecca 1980 The Acquisition of Classifiers in American Sign Language. In: Sign Language Studies 28, 193⫺208.
Kegl, Judy A./Schley, Sarah 1986 When Is a Classifier No Longer a Classifier? In: Nikiforidou, V./Clay, M. Van/Niepokuj, M./Feder, D. (eds.), Proceedings of the 12th Annual Meeting of the Berkeley Linguistics Society. Berkeley, CA: Berkeley Linguistics Society, 425⫺441.
Kooij, Els van der 2002 Phonological Categories in Sign Language of the Netherlands. The Role of Phonetic Implementation and Iconicity. PhD Dissertation, Utrecht University. Utrecht: LOT.
Liddell, Scott K./Johnson, Robert E. 1987 An Analysis of Spatial-Locative Predicates in American Sign Language. Paper Presented at the 4th International Symposium on Sign Language Research, Lappeenranta, Finland.
Liddell, Scott K. 2003 Sources of Meaning in ASL Classifier Predicates. In: Emmorey, Karen (ed.), Perspectives on Classifiers in Sign Languages. Mahwah, NJ: Lawrence Erlbaum, 199⫺220.
Mandel, Mark Alan 1977 Iconic Devices in American Sign Language. In: Friedman, Lynn A. (ed.), On the Other Hand. New Perspectives on American Sign Language. New York: Academic Press, 57⫺107.
Martin, Amber Joy/Sera, Maria D. 2006 The Acquisition of Spatial Constructions in American Sign Language and English. In: Journal of Deaf Studies and Deaf Education 11(4), 391⫺402.
McDonald, Betsy Hicks 1982 Aspects of the American Sign Language Predicate System. PhD Dissertation, University of Buffalo.
Meir, Irit 2001 Verb Classifiers as Noun Incorporation in Israeli Sign Language. In: Booij, Gerard/Marle, Jacob van (eds.), Yearbook of Morphology 1999. Dordrecht: Kluwer, 299⫺319.
Mithun, Marianne 1986 The Convergence of Noun Classification Systems. In: Craig, Colette (ed.), Noun Classes and Categorization. Amsterdam: Benjamins, 379⫺397.
Morgan, Gary/Woll, Bencie 2003 The Development of Reference Switching Encoded through Body Classifiers in British Sign Language. In: Emmorey, Karen (ed.), Perspectives on Classifiers in Sign Languages. Mahwah, NJ: Lawrence Erlbaum, 297⫺310.
Morgan, Gary/Herman, Rosalind/Barriere, Isabelle/Woll, Bencie 2008 The Onset and Mastery of Spatial Language in Children Acquiring British Sign Language. In: Cognitive Development 23, 1⫺19.
Newport, Elissa 1982 Task Specificity in Language Learning? Evidence from Speech Perception and American Sign Language. In: Wanner, Eric/Gleitman, Lila (eds.), Language Acquisition: The State of the Art. Cambridge: Cambridge University Press, 450⫺486.
Newport, Elissa 1988 Constraints on Learning and Their Role in Language Acquisition: Studies of the Acquisition of American Sign Language. In: Language Sciences 10, 147⫺172.
Nyst, Victoria 2007 A Descriptive Analysis of Adamorobe Sign Language (Ghana). PhD Dissertation, University of Amsterdam. Utrecht: LOT.
Padden, Carol A./Perlmutter, David M. 1987 American Sign Language and the Architecture of Phonological Theory. In: Natural Language and Linguistic Theory 5, 335⫺375.
Perniss, Pamela 2007 Space and Iconicity in German Sign Language (DGS). PhD Dissertation, University of Nijmegen. Nijmegen: MPI Series in Psycholinguistics.
Rosen, Sara Thomas 1989 Two Types of Noun Incorporation: A Lexical Analysis. In: Language 65, 294⫺317.
Sandler, Wendy/Lillo-Martin, Diane 2006 Sign Language and Linguistic Universals. Cambridge: Cambridge University Press.
Schembri, Adam 2001 Issues in the Analysis of Polycomponential Verbs in Australian Sign Language (Auslan). PhD Dissertation, University of Sydney.
Schembri, Adam 2003 Rethinking 'Classifiers' in Signed Languages. In: Emmorey, Karen (ed.), Perspectives on Classifiers in Sign Languages. Mahwah, NJ: Lawrence Erlbaum, 3⫺34.
Schick, Brenda 1990a Classifier Predicates in American Sign Language. In: International Journal of Sign Linguistics 1, 15⫺40.
Schick, Brenda 1990b The Effects of Morphosyntactic Structure on the Acquisition of Classifier Predicates in ASL. In: Lucas, Ceil (ed.), Sign Language Research. Theoretical Issues. Washington, DC: Gallaudet University Press, 358⫺374.
Seifart, Frank 2005 The Structure and Use of Shape-based Noun Classes in Miraña (North West Amazon). PhD Dissertation, University of Nijmegen. Nijmegen: MPI Series in Psycholinguistics.
Senft, Günter 2000 What Do We Really Know About Nominal Classification Systems? In: Senft, Günter (ed.), Systems of Nominal Classification. Cambridge: Cambridge University Press, 11⫺49.
Slobin, Dan I./Hoiting, Nini/Kuntze, Marlon/Lindert, Reyna/Weinberg, Amy/Pyers, Jennie/Anthony, Michelle/Biederman, Yael/Thumann, Helen 2003 A Cognitive/Functional Perspective on the Acquisition of 'Classifiers'. In: Emmorey, Karen (ed.), Perspectives on Classifiers in Sign Languages. Mahwah, NJ: Lawrence Erlbaum, 271⫺298.
Supalla, Ted 1980 Morphology of Verbs of Motion and Location in American Sign Language. In: Caccamise, Frank/Hicks, Don (eds.), Proceedings of the 2nd National Symposium of Sign Language Research and Teaching, 1978. Silver Spring, MD: National Association of the Deaf, 27⫺45.
Supalla, Ted 1982 Structure and Acquisition of Verbs of Motion and Location in American Sign Language. PhD Dissertation, University of California, San Diego.
Supalla, Ted 1986 The Classifier System in American Sign Language. In: Craig, Colette (ed.), Noun Classes and Categorization. Amsterdam: Benjamins, 181⫺214.
Talmy, Leonard 1985 Lexicalization Patterns: Semantic Structure in Lexical Forms. In: Shopen, Timothy (ed.), Language Typology and Syntactic Description. Grammatical Categories and the Lexicon. Cambridge: Cambridge University Press, 57⫺149.
Tang, Gladys 2003 Verbs of Motion and Location in Hong Kong Sign Language: Conflation and Lexicalization. In: Emmorey, Karen (ed.), Perspectives on Classifiers in Sign Languages. Mahwah, NJ: Lawrence Erlbaum, 143⫺165.
Tang, Gladys/Sze, Felix Y. B./Lam, Scholastica 2007 Acquisition of Simultaneous Constructions by Deaf Children of Hong Kong Sign Language. In: Vermeerbergen, Myriam/Leeson, Lorraine/Crasborn, Onno A. (eds.), Simultaneity in Signed Languages. Form and Function. Amsterdam: Benjamins, 283⫺316.
Taub, Sarah F. 2001 Language from the Body: Iconicity and Metaphor in American Sign Language. Cambridge: Cambridge University Press.
Voort, Hein van der 2004 A Grammar of Kwaza. Berlin: Mouton de Gruyter.
Wallin, Lars 1996 Polysynthetic Signs in Swedish Sign Language. PhD Dissertation, University of Stockholm.
Wallin, Lars 2000 Two Kinds of Productive Signs in Swedish Sign Language: Polysynthetic Signs and Size and Shape Specifying Signs. In: Sign Language & Linguistics 3, 237⫺256.
Wilbur, Ronnie B. 2008 Complex Predicates Involving Events, Time and Aspect: Is This Why Sign Languages Look so Similar? In: Quer, Josep (ed.), Signs of the Time. Selected Papers from TISLR 2004. Hamburg: Signum, 217⫺250.
Young, Robert/Morgan, William 1987 The Navajo Language ⫺ A Grammar and Colloquial Dictionary. Albuquerque, NM: University of New Mexico Press.
Zeshan, Ulrike 2003 'Classificatory' Constructions in Indo-Pakistani Sign Language: Grammaticalization and Lexicalization Processes. In: Emmorey, Karen (ed.), Perspectives on Classifiers in Sign Languages. Mahwah, NJ: Lawrence Erlbaum, 113⫺141.
Zwitserlood, Inge 1996 Who'll HANDLE the OBJECT? An Investigation of the NGT-classifier. MA Thesis, Utrecht University.
Zwitserlood, Inge 2003 Classifying Hand Configurations in Nederlandse Gebarentaal (Sign Language of the Netherlands). PhD Dissertation, Utrecht University. Utrecht: LOT.
Zwitserlood, Inge 2008 Morphology Below the Level of the Sign ⫺ Frozen Forms and Classifier Predicates. In: Quer, Josep (ed.), Signs of the Time. Selected Papers from TISLR 2004. Hamburg: Signum, 251⫺272.
Inge Zwitserlood, Nijmegen (The Netherlands)
9. Tense, aspect, and modality

1. Introduction
2. Tense
3. Aspect
4. Modality
5. Conclusions
6. Literature
Abstract

Cross-linguistically, the grammatical categories tense, aspect, and modality ⫺ when they are overtly expressed ⫺ are generally realized by free morphemes (such as adverbials and auxiliaries) or by bound inflectional markers. The discussion in this chapter will make clear that this generalization also holds true for sign languages. It will be shown that tense is generally encoded by time adverbials and only occasionally (and only in a few sign languages) by verbal inflection. In contrast, various aspect types are realized on the lexical verb, in particular by characteristic movement modulations. Only completive/perfective aspect is commonly realized by free morphemes across sign languages. Finally, deontic and epistemic modality is usually encoded by dedicated modal verbs. In relation to all three grammatical categories, the possibility of (additional) non-manual marking and the issue of grammaticalization will also be addressed.
1. Introduction

Generally, in natural languages, every sentence that is uttered must receive a temporal and aspectual interpretation as well as an interpretation with respect to modality, for instance, the possibility or necessity of occurrence of the event denoted by the main verb. Frequently, these interpretational nuances are not overtly specified. In this case, the required interpretation is either inferred from the context or the sentence receives a default interpretation. When information concerning tense, aspect, and/or modality (TAM) is overtly marked, this is usually done by means of verbal inflection or free morphemes such as adverbials or auxiliaries. Languages also show considerable variation with respect to which categories they mark.

Sign languages are no exception in this respect. Interestingly, TAM-marking in a given sign language is usually quite different from the patterns attested in the respective surrounding spoken language. In addition, a certain amount of variation notwithstanding, sign languages generally display strikingly similar patterns in the domain of TAM-marking (e.g. lack of tense inflection, rich systems of aspectual inflection), as will become clear in the following discussion.

In this chapter, we will address the three categories subsumed under the label 'TAM' in turn. We begin with tense marking (section 2), where we discuss common adverbial and less common inflectional strategies and also introduce the concept of 'time lines', which plays an important role in most sign languages studied to date. Section 3 on aspect provides an overview of the most common free and bound aspectual morphemes, their meaning, and their phonological realization. Finally, in section 4, we turn to the encoding of modality, focusing on selected deontic and epistemic modal expressions. In all three sections, manual and non-manual strategies will be considered and grammaticalization issues will be briefly discussed. Also, wherever appropriate, an attempt is made to compare sign languages to each other.
2. Tense

It has long been noted (Friedman 1975; Cogen 1977) that sign language verbs, just like verbs in many spoken languages, generally do not inflect for tense. Rather, tense is expressed by means of adverbials (section 2.1), which frequently make use of so-called 'time lines' (section 2.2). Still, it has been suggested for American Sign Language (ASL) and Italian Sign Language (LIS) that at least some verbs may inflect for tense ⫺ be it by means of manual or non-manual modulations; these proposals will be discussed in section 2.3.
2.1. Adverbials and lexical tense markers

Across sign languages, the most common strategy for locating an event on a time line with respect to the time of utterance is by means of adverbials. A sentence that contains no time reference is either interpreted within the time-frame previously established in the discourse or ⫺ by default ⫺ as present tense. Still, sign languages usually have a lexical sign meaning 'now', which may be used emphatically or for contrast to indicate present tense (Friedman 1975). Across sign languages, time adverbials commonly appear sentence-initially, as in the Spanish Sign Language (LSE) example in (1a) (Cabeza Pereiro/Fernández Soneira 2004, 69). They may either indicate a (more or less) specific point in time (e.g. past week (1a), yesterday, in-two-days) or more broadly locate the event in the future or past, as, for instance, the adverbial past in the German Sign Language (DGS) example in (1b).
(1) a. past week meeting start ten end quarter to three [LSE]
       'Last week the meeting started at ten and ended at a quarter to three.'
    b. past peter index3 book write [DGS]
       'Peter wrote a book.'
According to Aarons et al. (1995, 238), in ASL time adverbials may occur in sentence-initial (2a) or sentence-final position (2b), or between the subject and the (modal) verb (2c).
(2) a. tomorrow j-o-h-n buy car [ASL]
       'John will buy a car tomorrow.'
    b. j-o-h-n buy car tomorrow
       'John will buy a car tomorrow.'
    c. j-o-h-n tomorrow can buy car
       'John can buy a car tomorrow.'
Note again that the lack of tense inflection is by no means a peculiarity of the visual modality. Some East Asian languages (e.g. Chinese) display the same property and thus also resort to the use of adverbials to set up a time-frame in discourse.

Aarons et al. (1995) further argue that besides time adverbials, ASL also makes use of 'lexical tense markers' (LTMs). Superficially, at least some of these LTMs look very similar to time adverbials, but Aarons et al. show that they can be distinguished from adverbials on the basis of their syntactic distribution and articulatory properties. In the following, we only consider the LTM future-tns (other LTMs include past-tns, formerly-tns, and #ex-tns; also see Neidle et al. (2000, 77) for an overview). As for the syntactic distribution, lexical tense markers behave like modals; that is, they occur between the subject and the verb, they precede sentential negation (3a), and they cannot appear in infinitival complements. Crucially, a modal verb and a lexical tense marker cannot co-occur (3b) ⫺ irrespective of order (Aarons et al. 1995, 241f.). The authors therefore conclude that LTMs, just like modals, occupy the head of the Tense Phrase, a position different from that of adverbials (for a modal interpretation of future see section 4 below).
                          neg
(3) a. j-o-h-n future-tns not buy house [ASL]
       'John will not buy the house.'
    b. * j-o-h-n future-tns can buy house
       'John will be able to buy a house.'
With respect to articulatory properties, Aarons et al. show that the path movement of time adverbials such as future-adv can be modulated to express a greater or lesser distance in time. In contrast, this variation in path length is excluded with LTMs, which have a fixed articulation. Taken together, this shows that LTMs are more restricted than time adverbials in both their articulatory properties and their syntactic distribution.
2.2. Time lines

Concerning the articulation of time adverbials (and LTMs), it has been observed that almost all sign languages investigated to date make use of 'time lines' (see the previously mentioned references for ASL and LSE; see Brennan (1983) and Sutton-Spence/Woll (1999) for British Sign Language (BSL), Schermer/Koolhof (1990) for Sign Language of the Netherlands (NGT), Massone (1994) for Argentine Sign Language (LSA), Schmaling (2000) for Hausa Sign Language, among many others). Time lines are based on culture-specific orientational metaphors (Lakoff/Johnson 1980). In many cultures, time is imagined as proceeding linearly, and past and future events are conceptualized as lying either behind or before us. This conceptual basis is linguistically encoded, as, for instance, in the English metaphors 'looking forward to something' and 'something lies behind me'. In sign languages, space can be used metaphorically in time expressions.

Various time lines have been described, but here we will focus on the line that runs "parallel to the floor from behind the body, across the shoulder to ahead up to an arm's length, on the signer's dominant side" (Sutton-Spence/Woll 1999, 183; see Figure 9.1). Thus, in adverbials referring to the past (e.g. past, before, yesterday), path movement proceeds backwards (towards or over the shoulder, depending on distance in time), while in adverbials referring to the future (e.g. future, later, tomorrow), we observe forward movement from the body into neutral signing space ⫺ again, the length of the path movement indicates distance in time. Present tense (as in now or today) is expressed by a downward movement next to or in front of the body. Other time lines that have been described are located in front of the body, either horizontally (e.g. for duration in time) or vertically (e.g. for growth) (see, for instance, Schermer/Koolhof (1990) and Massone (1994) for illustrations).

Fig. 9.1: Time line, showing points of reference for past, present, and future

Interestingly, in some cultures, the flow of time is conceptualized differently, namely such that past events are located in front (i.e. before our eyes) while future events lie behind the body (because one cannot see what has not yet happened). As before, this conceptual image is reflected in language (e.g. Malagasy; Dahl 1995), and it is expected that it will also be reflected in the sign language used in such a culture. An example of a sign language that does not make use of the time line illustrated in Figure 9.1 is Kata Kolok, a sign language used in a village in Bali (see chapter 24, Shared Sign Languages, for discussion). Still, signers do refer to spatial positions in temporal expressions in a different way. For instance, given that the village lies close to the equator, pointing approximately 90° upwards signifies noon, while pointing 180° to the west means six-o-clock(pm) (or more generally 'late afternoon time'), based on the approximate position of the sun at the respective time (Marsaja 2008, 166).
2.3. Tense marking on the verb

Sutton-Spence and Woll (1999, 116) note that in some dialects of BSL, certain verbs differ depending on whether the event is in the past or present (e.g. win/won, see/saw, go/went). These verb pairs, however, do not exemplify systematic inflection; rather, the past tense forms should be treated as lexicalized exceptions.

Jacobowitz and Stokoe (1988) claim to have found systematic manual indications of tense marking in more than two dozen ASL verbs. For verbs like come and go, which involve path movement in their base form, they state that "extension (of the hand) at the wrist, (of the forearm) at the elbow, or (of the upper arm) at the shoulder" ⫺ or a combination thereof ⫺ will denote future tense. Similarly, "flexion at the wrist, elbow, or shoulder with no other change in the performance of an ASL verb" will denote past tense (Jacobowitz/Stokoe 1988, 337). The authors stress that the time line cannot be held responsible for these inflections, as the direction of movement remains unchanged. Rather, the changes result in a slight movement or displacement on the vertical plane (extension of joints: upward; flexion of joints: downward). For instance, in order to express the meaning 'will go', the signer's upper arm is extended at the shoulder. It is worth pointing out that the vertical scale has also been found to play a role in spoken language metaphor, at least in referring to future events (e.g. 'What is coming up next week?' (Lakoff/Johnson 1980, 16)).

A systematic change that does involve the time line depicted in Figure 9.1 has been described for LIS by Zucchi (2009). Zucchi observes that in LIS, temporal information can be conveyed by means of certain non-manual (that is, suprasegmental) features that co-occur with the verb. The relevant feature is shoulder position: if the shoulder is tilted backward ('sb'), then the action took place before the time of utterance (past tense (4a)); if the shoulder is tilted forward ('sf'), then the action is assumed to take place after the time of utterance (future tense (4b)). A neutral shoulder position (i.e. shoulder aligned with the rest of the body) would indicate present tense (Zucchi 2009, 101).
        sb
(4) a. gianni house buy [LIS]
       'Gianni bought a house.'
        sf
    b. gianni house buy
       'Gianni will buy a house.'
    c. tomorrow gianni house buy
       'Tomorrow Gianni will buy a house.'
Zucchi concludes that LIS is unlike Chinese and more like Italian and English in that grammatical tense is marked on verbs by means of shoulder position. He further shows that non-manual tense inflection is absent in sentences containing past or future time adverbs (4c), a pattern that is clearly different from the one attested in Italian and English (Zucchi 2009, 103). In fact, the combination of a time adverb and non-manual inflection leads to ungrammaticality.
3. Aspect

While tense marking appears to be absent in most sign languages, many of the sign languages studied to date have rich systems of aspectual marking. Aspectual systems are commonly assumed to consist of two components, namely situation aspect and viewpoint aspect (Smith 1997). Situation aspect is concerned with intrinsic temporal properties of a situation (e.g. duration, repetition over time), while viewpoint aspect has to do with how a situation is presented (e.g. as closed or open). Another notion often subsumed under the term aspect is Aktionsart or lexical aspect, which describes the internal temporal structure of events. This category is discussed in detail in chapter 20 on lexical semantics (see also Wilbur 2008, 2011).

Across sign languages, aspect is either marked by free functional elements (section 3.1) or by modulations of the verb sign (section 3.2), most importantly by characteristic changes in the manner and frequency of movement, as first described in detail by Klima and Bellugi (1979). It is important to note that Klima and Bellugi interpreted the term 'aspect' fairly broadly and also included in their survey alterations that do not have an impact on the temporal structure of the event denoted by the verb, namely adverbial modifications such as manner (e.g. 'slowly') and degree (e.g. 'intensively') and distributional quantification (e.g. exhaustive marking; see chapter 7, Agreement, for discussion). We follow Rathmann (2005) in excluding these alterations from the following discussion.
3.1. Free aspectual markers

For numerous (unrelated) sign languages, free grammatical markers have been described that convey completive and/or perfective aspect (i.e. viewpoint aspect). Commonly, these aspectual markers are grammaticalized from verbs (mostly finish) or adverbs (e.g. already) ⫺ a developmental path that is also frequently attested in spoken languages (Heine/Kuteva 2002; see also chapter 34 for grammaticalization in sign languages). In LIS, for instance, the lexical verb done (meaning 'finish', (5a)) can also convey aspectual meanings, such as perfective aspect in (5b) (Zucchi 2009, 123f.). Note that the syntactic position differs: when used as a main verb, done appears in preverbal position, while in its use as an aspectual marker, it follows the main verb (a similar observation has been made for the ASL element finish by Fischer/Gough (1999 [1972]); for ASL, also see Janzen (1995); for a comparison of ASL and LIS, see Zucchi et al. (2010)).
(5) a. gianni cake done eat [LIS]
       'Gianni has finished eating the cake.'
    b. gianni house buy done
       'Gianni has bought a house.'
Meir (1999) provides a detailed analysis of the Israeli Sign Language (Israeli SL) perfect marker already. First, she shows that, despite the fact that this marker frequently occurs in past tense contexts, it is not a past tense marker, but rather an aspectual marker denoting perfect constructions; as such, it can, for instance, also co-occur with time adverbials denoting future tense. Following Comrie (1976), she argues that "constructions with already convey the viewpoint of 'a present state [which] is referred to as being the result of some past situation' (Comrie 1976, 56)" (Meir 1999, 50). Among the manifestations of that use of already are the 'experiential' perfect (6a) and the perfect denoting a terminated (but not necessarily completed) situation (6b) (adapted from Meir 1999, 50f.).
(6) a. index2 already eat chinese? [Israeli SL]
       'Have you (ever) eaten Chinese food?'
    b. index1 already write letter sister poss1
       'I have written a letter to my sister.'
Meir (1999) also compares already to its ASL counterpart finish and shows that the functions and uses of already are more restricted. She hypothesizes that this might result from the fact that Israeli SL is a much younger language than ASL and that, therefore, already has not yet grammaticalized to the same extent as finish. Alternatively, the differences might be due to the fact that the two functional elements have different lexical sources: a verb in ASL, but an adverb in Israeli SL.

For Greek Sign Language (GSL), Sapountzaki (2005) describes three different signs in the set of perfective markers: been, for 'done, accomplished, experienced' (7a); its negative counterpart not-been, for 'not done, accomplished, experienced' (7b); and not-yet for 'not yet done, accomplished, experienced' (also see chapter 15, Negation, for discussion of negative aspectual markers).
(7) a. yesterday ctella month ago letter send been [GSL]
       'Yesterday she told me that she had sent the letter a month ago.'
    b. granddad lesson not-been
       'Grandpa had not gone to school.'
The use of similar completive/perfective markers has also been described for BSL (Brennan 1983), DGS (Rathmann 2005), Swedish Sign Language (SSL, Bergman/Dahl 1994), and Turkish Sign Language (TİD, Zeshan 2003). Zeshan (2003, 49) further points out that Indo-Pakistani Sign Language (IPSL) has a free completive aspect marker "that is different from and independent of two signs for finish and is used as an aspect marker only".

For NGT, Hoiting and Slobin (2001) describe a free marker of continuous/habitual aspect, which they gloss as through. This marker is used when the lexical verb cannot inflect for aspect by means of reduplication (see section 3.2) due to one of the following phonological constraints: (i) it has internal movement or (ii) it includes body contact. The sign try, in which the R-hand makes contact with the nose, exemplifies constraint (ii); see example (8) (adapted from Hoiting/Slobin 2001, 129). Note that the elliptical reduplication characteristic of continuous/habitual inflection is still present; however, it accompanies through rather than the main verb. Hoiting and Slobin argue that the use of through is an example of borrowing from spoken Dutch, where the corresponding element door ('through') can be used with some verbs to express the same aspectual meanings.
(8) index3 try throughCC [NGT]
    'He tried continuously / He tried and tried and tried.'
3.2. Aspectual inflection on verbs

Building on earlier work on ASL verbal reduplication by Fischer (1973), Klima and Bellugi (1979) provide a list of aspectual distinctions that can be marked on ASL verbs, which includes no fewer than 15 different aspect types. They point out that the attested modulations are characterized by "dynamic qualities and manners of movement" such as reduplication, rate of signing, tension, and pauses between cycles of reduplication, and they also provide evidence for the morphemic status of these modulations. Given considerable overlap in the meaning and form of some of the proposed aspect types, later studies attempted to re-group the proposed modulations and to reduce their number (e.g. Anderson 1982; Wilbur 1987). More recently, Rathmann (2005) suggested that six aspectual morphemes have to be distinguished in ASL: the free aspectual marker finish (discussed in the previous section) as well as the bound inflectional morphemes continuative, iterative, habitual, hold, and conative. Only the first three of these morphemes ⫺ all of which belong to the class of situation aspect ⫺ will be discussed in some detail below.

Before turning to the discussion of aspectual morphemes, however, we wish to point out that not all scholars are in agreement about the inflectional nature of these morphemes. Based on a discussion of aspectual reduplication in SSL, Bergman and Dahl (1994), for instance, argue that the morphological process involved is ideophonic rather than inflectional. According to Bergman and Dahl (1994, 412f.), "ideophones are usually a class of words with peculiar phonological, grammatical, and semantic properties. Many ideophones are onomatopoetic [...]. A typical ideophone can be seen as a global characterization of a situation". In particular, they compare the system of aspectual reduplication in SSL to a system of ideophones ('expressives') found in Kammu, a language spoken in Laos. These ideophones are characterized by "their iconic and connotative rather than symbolic and denotative meaning" (Svantesson 1983; cited in Bergman/Dahl 1994, 411). We cannot go into detail here concerning the parallels between Kammu ideophones and SSL aspectual reduplication, but it is important to note that both systems involve a certain degree of iconicity and that Bergman and Dahl (1994, 418) conclude "that the gestural-visual character of signed languages favors the development of iconic or quasi-iconic processes like reduplication" to express ideophonic meanings similar to those of Kammu expressives.
3.2.1. Continuative

The label 'continuative', as used by Rathmann (2005), also includes the aspectual modulations 'durative' and 'protractive' suggested by Klima and Bellugi. According to Rathmann (2005, 36), the semantic contribution of the continuative morpheme is that "the temporal interval over which the eventuality unfolds is longer than usual and uninterrupted". For instance, combination of the morpheme with the verb study yields the meaning 'to study for a long time'.

There are strong similarities across sign languages in how continuative is marked. Most frequently, 'slow reduplication' is mentioned as an integral component of this aspect type. In more detailed descriptions, the modulation is described as involving slow arcing movements. According to Aronoff, Meir, and Sandler (2005, 311), for instance, ASL durative aspect is marked by "superimposing an arc-shaped morpheme on the movement of the LML sign, and then reduplicating, to create a circular movement" (LML = location-movement-location). Sutton-Spence and Woll (1999) note that in BSL, verbs that do not have path movement, such as look and hold, mark the continuative by an extended hold. Hoiting and Slobin (2001, 127) describe continuative aspect in NGT as involving "three repetitions of an elliptical modulation accompanied by pursed lips and a slight blowing gesture" (see section 3.3 for further discussion of non-manual components).
3.2.2. Iterative

Rathmann (2005) subsumes three of the aspect types distinguished by Klima and Bellugi (1979) under the label 'iterative': the 'incessant', 'frequentative', and 'iterative' (note that Wilbur (1987) groups the incessant, which implies the rapid recurrence of a characteristic, together with the habitual). The meaning contributed by the iterative morpheme can be paraphrased as 'over and over again' or 'repeatedly', that is, multiple instances of an eventuality. Phonologically, the morpheme is realized by reduplication of the movement of the verb root.

Several sign languages have forms that look similar to the iterative morpheme in ASL. Bergman and Dahl (1994), for instance, describe fast reduplication in SSL, with repeated short movements. Sutton-Spence and Woll (1999) find similar patterns in BSL. Similarly, Zeshan (2000) for IPSL and Senghas (1995) for Nicaraguan Sign Language (ISN) describe repeated movements executed in the same location as being characteristic of iterative aspect.
3.2.3. Habitual

The 'habitual' is similar to the 'iterative' in that it also describes the repetition of an eventuality. The habitual, however, expresses the notion of a pattern of events or behaviours rather than the quality of a specific event. Thus, the semantic contribution of the habitual morpheme can be paraphrased as 'regularly' or 'usually'. Also, in contrast to the iterative morpheme, the habitual morpheme does not assume that there is an end to the repetition of the eventualities. Just like the iterative, the habitual is phonologically realized by reduplication. Klima and Bellugi (1979) and Rathmann (2005), however, point out that the habitual morpheme involves smaller and faster movement than the iterative morpheme.

Again, similar marking has been attested in other sign languages. Cabeza Pereiro and Fernández Soneira (2004, 76), for instance, also mention that LSE uses repetition of movement to indicate habitualness. Interestingly, Hoiting and Slobin (2001, 127) describe a somewhat different pattern for NGT; they observe that in this sign language, the habitual is characterized by "slower elliptical modulation accompanied by gaze aversion, lax lips with protruding tongue, and slowly circling head movement".
3.2.4. Other aspectual morphemes

The aspect types introduced in the previous sections are the ones most commonly discussed in the sign language literature. We want to briefly mention some further aspect types that have been suggested, without going into details of their phonological realization (see Rathmann (2005) for details). First, there is the 'unrealized inceptive' (Liddell 1984), the meaning of which can be paraphrased as 'was about to … but'. Second, Brentari (1998) describes the 'delayed completive', which adds the meaning of 'at last' to the verb. Third, Jones (1978) identifies an aspectual morpheme which he labels 'unaccomplished' and which expresses that an event is unfinished in the present ('to attempt to', 'to be in the process of'). Despite semantic differences, Rathmann (2005) suggests subsuming these three aspect types under a single label 'conative', an attempt that has been criticized by other scholars. He argues that what these aspect types have in common is that "there is an attempt for the eventuality to be carried out" (Rathmann 2005, 47). Rathmann further describes a 'hold' morpheme, which adds a final endpoint to an event, thereby signalling that the event is interrupted or terminated (without necessarily being completed).

Zeshan (2003) claims that TİD, besides two free completive aspect markers comparable to the ones discussed in section 3.1, has a simultaneous morpheme for completive aspect which may combine with some verbs ⫺ a strategy which appears to be quite unique cross-linguistically. The phonological reflex of this morpheme consists of "a single accentuated movement, which may have a longer movement path than its non-completive counterpart and may be accompanied by a single pronounced head nod or, alternatively, a forward movement of the whole torso" (Zeshan 2003, 51). She provides examples involving the verbs see, do, and go and points out that, for phonological reasons, the morpheme cannot combine with verbs that consist of a hold only (e.g. think).
3.3. Non-manual aspect marking

Above, we mentioned in passing that certain aspect types may be accompanied by non-manual markers. The continuative, for instance, commonly involves puffed cheeks and/or pursed lips and blowing of air while performing the characteristic reduplication (Hoiting/Slobin 2001, 127). For SSL, Bergman (1983) observes that, at least with some verbs, durative aspect (e.g. 'think for a long time') can be realized with the hand held still while the head performs a cyclical arc movement ⫺ that is, in a sense, the head movement replaces the hand movement. Grose (2003) argues that in ASL, a head nod commonly occurs in sentences with a perfective interpretation, independent of their temporal specification. The head nod may accompany an aspectual marker like finish, but it may also be the sole marker of perfectivity, co-occurring with a lexical sign or appearing in clause-final position. In example (9), the past reading comes from the sign past, while the perfective reading comes from the head nod ('hn') (Grose 2003, 54).
                    hn
(9) index1 past walk school [ASL]
    'I have walked to school / I used to walk to school.'
4. Modality

In spoken languages, modal expressions are typically verbal auxiliaries. From a semantic point of view, modals convey deontic or epistemic modality. Deontic modality has to do with the necessity or possibility of a state of affairs according to a norm, a law, a moral principle, or an ideal; the related meanings are obligation, permission, or ability. Conversely, epistemic modality is related to the signer's knowledge about the world (Palmer 1986). What is possible or necessary in a world according to a signer's knowledge depends on his or her epistemic state. In many languages (sign languages included), modal expressions are often ambiguous between epistemic and deontic readings. This is illustrated by the English example in (10).
(10) Mary must be at home.
     a. Given what I know, it is necessary that she is at home now (epistemic).
     b. Given some norms, it is necessary that she is at home now (deontic).
The grammatical category of modality as well as modal expressions have been described for different sign languages: for ASL, see Wilcox and Wilcox (1995), Janzen and Shaffer (2002), and Wilcox and Shaffer (2006); for GSL, see Sapountzaki (2005); for LSA, see Massone (1994); and for Brazilian Sign Language (LSB), see Ferreira Brito (1990). Note that some modal verbs have dedicated negative forms due to cliticization or suppletion (for negative modals, see Shaffer (2002) and Pfau/Quer (2007); see also chapter 15 on negation).
4.1. Deontic modality

In their study on the expression of modality in ASL, Wilcox and Shaffer (2006) distinguish between 'participant-internal' and 'participant-external' necessity and possibility. Just as in English, necessity and possibility are mainly expressed by modal verbs/auxiliaries. In addition, the manual verb signs are typically accompanied by specific non-manual markers, such as furrowed eyebrows, pursed lips, or a head nod, which indicate the degree of modality. In ASL, the deontic modality of necessity is expressed by the modal must/should, as illustrated in the examples in (11), which are adapted from Wilcox and Shaffer (2006, 215; 'bf' = brow furrowing). The sign is performed with a crooked index finger (cf. also Wilcox/Wilcox 1995).
        top
(11) a. before class must lineup(2h) [ASL]
        'Before class we had to line up.'
     b. (leaning back) should cooperate, work together, interact
                bf
        forget (gesture) past push-away new life from-now-on should
        'They (the deaf community) should cooperate and work together, they should forget about the past and start anew.'
The examples in (11) describe an external deontic necessity where the obligation is imposed by some external source, that is, either an authority or general circumstances. An example of a participant-internal use of the modal must/should is given in (12), again adapted from Wilcox and Shaffer (2006, 217).
(12)      top
     know south country (waits for attention) know south country [ASL]
          top
     spanish food strong chile must index1 (leans back)
     'You know how it is in the southern part. You know how it is with Spanish food. In the southern part, there's a lot of hot chile. I have to have chile.'
The DGS deontic modal must looks similar to the corresponding ASL modal; unlike ASL must, however, the DGS sign is articulated with an extended index finger and palm orientation towards the contra-lateral side of the signing space. For the expression of the deontic meaning of possibility, the modal verb can is used in ASL. As in English, can is not only used to express physical or mental ability, but also to indicate permission or the possibility of an event occurring. Again, the condition for the situation described by the sentence can be participant-internal, as in (13a), or participant-external, as in (13b). The first use of can can be paraphrased as 'the signer has the (physical) ability to do something', while the second one involves permission, that is, 'the teacher is allowed to do something' (Wilcox/Shaffer 2006, 221 f.).
(13) a. index1 can lift-weight 100 pounds [ASL]
        'I can lift one hundred pounds.'
     b. poss1 mother time teach, teach can sign but always fingerspell+++
        'In my mother's time the teachers were allowed to sign, but they always fingerspelled.'

On the basis of historical sources, Wilcox and Wilcox (1995) argue that the ASL modals can and must have developed from gestural sources via lexical elements. can originated from a lexical sign meaning strong/power, which in turn can be traced back to a gesture 'strong' in which the two fists perform a short tense downward movement in front of the body. Interestingly, the modal can has undergone some phonological changes. In particular, the orientation of the hands has changed. Likewise, Wilcox and Wilcox assume that the modals must/should have developed from a gestural source, namely a deictic pointing gesture indicating monetary debt. This gesture entered the lexicon of Old French Sign Language and – due to the influence of (Old) French Sign Language on ASL – the lexicon of ASL. In both sign languages, the lexical sign grammaticalized into a deontic modal expressing strong (i.e. must in (11a) above) or weak (should in (11b) above) obligation. Again, the modals have undergone some phonological changes. Both modals are phonologically reduced in that the base hand present in the source sign owe is lost. But they differ from each other with respect to movement: must has one downward movement, while the movement of should is shorter and reduplicated (cf. also Janzen/Shaffer 2002; Wilcox/Shaffer 2006; for similar LSC examples, see Wilcox 2004; for grammaticalization in sign languages, see Pfau/Steinbach (2006) and chapter 34, Lexicalization and Grammaticalization). The system of modal expressions in DGS is very similar to that of ASL. One difference is that we are not aware of a lexical sign that must could have been derived from. It is, however, clearly related to a co-speech gesture that commonly accompanies orders and commands. We therefore assume that the DGS modal, unlike the corresponding ASL modal, is directly derived from a gestural source. In comparison to ASL and DGS, LSB appears to have a greater number of different modal expressions at its disposal. Moreover, these modal expressions belong to different parts of speech. Ferreira Brito (1990) analyzes the LSB modals need, can, prohibit, have-not, and let as verbs, obligatory, prohibited, optional1, and optional2 as adjectives, and obligation as a noun. Note finally that the development of modal verbs expressing physical/mental ability and possibility from a lexical element is attested in spoken languages, too. Latin potere ('to be able'), for instance, is related to the adjective potens ('strong, powerful'). Also, modal verbs that express obligation may be grammaticalized from lexical items that refer explicitly to concepts related to obligation, such as 'owe' (cf. Bybee/Perkins/Pagliuca 1994).
4.2. Epistemic modality

In the previous section, we saw that the deontic interpretation of modal verbs basically concerns the necessity or possibility for a participant to do something. By contrast, the more grammaticalized epistemic interpretation of modal verbs indicates the signer's degree of certainty about or degree of commitment to the truth of an utterance (Palmer 1986). In LSB, for instance, a high degree of certainty is expressed by the
sentence-final modal construction have certainty, as illustrated in (14a), taken from Ferreira Brito (1990, 236). In ASL, as in DGS, epistemic modality is realized not only by modal verbs alone, but by a combination of modals and additional manual and non-manual markers. According to Wilcox and Shaffer (2006, 226 f.), the epistemic modal verb should, which expresses a high degree of certainty, occupies the sentence-final position in ASL. In addition, the non-manual markers 'head nod' ('hn') and 'bf' accompany the modal verb (14b). Likewise, the modal verb possible appears sentence-finally and is also accompanied by a head nod (14c).
(14) a. today rain … have certainty [LSB]
        'I am certain that today it will rain.'
             top               bf/hn
     b. library have deaf life should [ASL]
        'The library should have Deaf Life. / I'm sure the library has Deaf Life.'
             top                                            bf/hs
     c. same sign because bad translation false c-o-g-n-a-t-e doubt
                        hn
        (pause) (gesture "well") possible
        'I doubt the two concepts share the same sign (now) because of a problem with translation, or because of a false cognate, but, well, I suppose it's possible.'
Besides non-manual markers, manual markers such as sharp and short movements vs. soft and reduplicated movements may also have an impact on the interpretation of the modal verb. Whereas sharp and short movements trigger a stronger commitment, soft and reduplicated movements indicate a weaker commitment (Wilcox/Shaffer 2006). In addition to the modal verbs in (14), ASL also uses semi-grammaticalized expressions such as feel, obvious, and seem to express epistemic modality (Wilcox/Wilcox 1995). Again, in their epistemic interpretation, feel, obvious, and seem are often accompanied by a head nod and furrowed eyebrows. Interestingly, the sign future can not only be used as a lexical tense marker future-tns (as discussed in section 2.1) but also as a lexical marker of epistemic modality, cf. example (15a), which is adapted from Wilcox and Shaffer (2006, 228). A similar observation has been made by Massone (1994, 128) for the sentence-final LSA temporal marker in-the-future (15b). Let us discuss example (15a) in some more detail: the first occurrence of future, which is articulated with raised eyebrows, a manual wiggle marker, and a longer, softer movement, receives the temporal interpretation future-tns. The second occurrence is performed with short and sharp movements and accompanied by the two non-manual markers head nod and furrowed eyebrows, which are typical for the epistemic interpretation.
(15) a. rt 29 think-like ix3 r-o-c-k-v-i-l-l-e p-i-k-e ix3 build+ ix3 [ASL]
               top              bf/hn
        future(wiggle) develop future s-o why must 1move3 near columbia mall?
        '(I live off) route 29, the Rockville Pike area. In the future I'm sure they will develop that area. So why do I have to move all the way up near Columbia Mall?'
     b. maria ix 3aabandon3b in-the-future [LSA]
        'Maria will abandon him.'
In many sign languages, the degree of modality seems to be marked mainly by non-manual means. Wilcox and Shaffer (2006, 229) argue that "it is appropriate to discuss the semantics of modal strength as a matter of degree intensification – that is, as variation along a scale of intensification of necessity, possibility, and speaker's epistemic commitment". Since manual and non-manual modification is a frequent means of expressing intensification in many sign languages, the use of both types of marker in modal intensification comes as no surprise. An alternative strategy is to use lexical expressions. Ferreira Brito's (1990) discussion of modal expressions in LSB shows that LSB chooses the second strategy in that it uses a variety of lexical modal expressions to realize modal intensification. Note, finally, that speaker- and addressee-oriented (epistemic) meaning nuances, such as reference to common knowledge, reference to evident knowledge, or uncertainty, are also extensively discussed in Herrmann (2010). In many spoken languages, such meanings are, for example, triggered by modal particles or equivalent expressions. A main result of Herrmann's typological study, which compares three sign languages (DGS, NGT, and Irish Sign Language), is that all three sign languages investigated use mainly non-manual means to express such nuances of meaning.
5. Conclusion

Sign languages employ free and bound grammatical markers to express the grammatical categories of tense, aspect, and modality. While across sign languages, free morphemes – time adverbials or lexical tense markers – are the most common strategy for encoding tense, various aspect types can be realized by verbal inflections, many of which involve characteristic movement alterations in combination with reduplication. The encoding of completive/perfective aspect is exceptional in this respect, as these aspect types are usually realized by free grammatical morphemes. The same is true for modality distinctions, which are generally expressed by modal verbs. The discussion has made clear that, when it comes to TAM-marking, sign languages are strikingly similar to each other – a pattern that is also familiar from the study of other inflectional categories such as pluralization, agreement, and classification (see chapters 6, 7, and 8); also see Meier (2002). The attested free TAM-markers are also highly interesting from a diachronic perspective because they involve grammaticalization pathways that are well known from the study of TAM-systems in spoken languages (Bybee/Perkins/Pagliuca 1994): future tense markers may develop from movement verbs, completive and perfective aspect markers are commonly grammaticalized from adverbials and verbs, and modals develop from adjectives and verbs. The latter are particularly interesting in this context because the lexical source of a modal can sometimes be traced back to a gestural source. While aspects of the TAM-systems of at least some sign languages are fairly well understood, further research is required to identify (obligatory and optional) non-manual markers, to distinguish truly inflectional non-manuals from non-manual adverbials, and to investigate possible gestural sources for the non-manuals involved in TAM-marking.
6. Literature

Aarons, Debra/Bahan, Ben/Kegl, Judy/Neidle, Carol 1995 Lexical Tense Markers in American Sign Language. In: Emmorey, Karen/Reilly, Judy (eds.), Language, Gesture, and Space. Hillsdale, NJ: Erlbaum, 225–253.
Anderson, Lloyd B. 1982 Universals of Aspect and Parts of Speech: Parallels Between Signed and Spoken Languages. In: Hopper, Paul J. (ed.), Tense–Aspect: Between Semantics and Pragmatics. Amsterdam: Benjamins, 91–114.
Aronoff, Mark/Meir, Irit/Sandler, Wendy 2005 The Paradox of Sign Language Morphology. In: Language 81(2), 301–344.
Bergman, Brita 1983 Verbs and Adjectives: Morphological Processes in Swedish Sign Language. In: Kyle, Jim/Woll, Bencie (eds.), Language in Sign: An International Perspective on Sign Language. London: Croom Helm, 3–9.
Bergman, Brita/Dahl, Östen 1994 Ideophones in Sign Language? The Place of Reduplication in the Tense-aspect System of Swedish Sign Language. In: Bache, C./Basbøll, H./Lindberg, C.-E. (eds.), Tense, Aspect and Action: Empirical and Theoretical Contributions to Language Typology. Berlin: Mouton de Gruyter, 397–422.
Brennan, Mary 1983 Marking Time in British Sign Language. In: Kyle, Jim/Woll, Bencie (eds.), Language in Sign: An International Perspective on Sign Language. London: Croom Helm, 10–31.
Brentari, Diane 1998 A Prosodic Model of Sign Language Phonology. Cambridge, MA: MIT Press.
Bybee, Joan L./Perkins, Revere D./Pagliuca, William 1994 The Evolution of Grammar: Tense, Aspect, and Modality in the Languages of the World. Chicago: Chicago University Press.
Cabeza Pereiro, Carmen/Fernández Soneira, Ana 2004 The Expression of Time in Spanish Sign Language (SLE). In: Sign Language & Linguistics 7(1), 63–82.
Cogen, Cathy 1977 On Three Aspects of Time Expression in American Sign Language. In: Friedman, Lynn A. (ed.), On the Other Hand: New Perspectives on American Sign Language. New York: Academic Press, 197–214.
Comrie, Bernard 1976 Aspect. Cambridge: Cambridge University Press.
Dahl, Øyvind 1995 When the Future Comes from Behind: Malagasy and Other Time Concepts and Some Consequences for Communication. In: International Journal of Intercultural Relations 19(2), 197–209.
Ferreira Brito, Lucinda 1990 Epistemic, Alethic, and Deontic Modalities in a Brazilian Sign Language. In: Fischer, Susan D./Siple, Patricia (eds.), Theoretical Issues in Sign Language Research. Vol. 1: Linguistics. Chicago: University of Chicago Press, 229–260.
Fischer, Susan D. 1973 Two Processes of Reduplication in the American Sign Language. In: Foundations of Language 9, 469–480.
Fischer, Susan/Gough, Bonnie 1999 [1972] Some Unfinished Thoughts on finish. In: Sign Language & Linguistics 2(1), 67–77.
Friedman, Lynn A. 1975 Space, Time, and Person Reference in American Sign Language. In: Language 51(4), 940–961.
Grose, Donovan R. 2003 The Perfect Tenses in American Sign Language: Nonmanually Marked Compound Tenses. MA Thesis, Purdue University, West Lafayette.
Heine, Bernd/Kuteva, Tania 2002 World Lexicon of Grammaticalization. Cambridge: Cambridge University Press.
Herrmann, Annika 2009 Modal Particles and Focus Particles in Sign Languages. A Cross-linguistic Study of DGS, NGT, and ISL. PhD Dissertation, University of Frankfurt/Main (to be published in the series Sign Language and Deaf Communities, Mouton de Gruyter).
Hoiting, Nini/Slobin, Dan I. 2001 Typological and Modality Constraints on Borrowing: Examples from the Sign Language of the Netherlands. In: Brentari, Diane (ed.), Foreign Vocabulary in Sign Languages. A Cross-linguistic Investigation of Word Formation. Mahwah, NJ: Erlbaum, 121–137.
Jacobowitz, Lynn/Stokoe, William C. 1988 Signs of Tense in ASL Verbs. In: Sign Language Studies 60, 331–339.
Janzen, Terry 1995 The Polygrammaticalization of finish in ASL. MA Thesis, University of Manitoba, Winnipeg.
Janzen, Terry/Shaffer, Barbara 2002 Gesture as the Substrate in the Process of ASL Grammaticization. In: Meier, Richard/Cormier, Kearsy/Quinto-Pozos, David (eds.), Modality and Structure in Signed and Spoken Languages. Cambridge: Cambridge University Press, 199–223.
Jones, Philip 1978 On the Interface of ASL Phonology and Morphology. In: Communication and Cognition 11, 69–78.
Klima, Edward/Bellugi, Ursula 1979 The Signs of Language. Cambridge, MA: Harvard University Press.
Lakoff, George/Johnson, Mark 1980 Metaphors We Live by. Chicago: University of Chicago Press.
Liddell, Scott K. 1984 Unrealized Inceptive Aspect in American Sign Language: Feature Insertion in Syllabic Frames. In: Drogo, Joseph/Mishra, Veena/Testen, David (eds.), Papers from the 20th Regional Meeting of the Chicago Linguistic Society. Chicago: University of Chicago Press, 257–270.
Marsaja, I Gede 2008 Desa Kolok – A Deaf Village and Its Sign Language in Bali, Indonesia. Nijmegen: Ishara Press.
Massone, Maria Ignacia 1994 Some Distinctions of Tense and Modality in Argentine Sign Language. In: Ahlgren, Inger/Bergman, Brita/Brennan, Mary (eds.), Perspectives on Sign Language Structure. Papers from the Fifth International Symposium on Sign Language Research. Durham: ISLA, 121–130.
Meier, Richard P. 2002 Why Different, Why the Same? Explaining Effects and Non-effects of Modality Upon Linguistic Structure in Sign and Speech. In: Meier, Richard P./Cormier, Kearsy/Quinto-Pozos, David G. (eds.), Modality and Structure in Signed and Spoken Languages. Cambridge: Cambridge University Press, 1–25.
Meir, Irit 1999 A Perfect Marker in Israeli Sign Language. In: Sign Language & Linguistics 2(1), 43–62.
Neidle, Carol/Kegl, Judy/MacLaughlin, Dawn/Bahan, Ben/Lee, Robert G. 2000 The Syntax of American Sign Language. Functional Categories and Hierarchical Structure. Cambridge, MA: MIT Press.
Palmer, Frank R. 1986 Mood and Modality. Cambridge: Cambridge University Press.
Pfau, Roland/Quer, Josep 2007 On the Syntax of Negation and Modals in German Sign Language (DGS) and Catalan Sign Language (LSC). In: Perniss, Pamela/Pfau, Roland/Steinbach, Markus (eds.), Visible Variation. Comparative Studies on Sign Language Structure. Berlin: Mouton de Gruyter, 129–161.
Pfau, Roland/Steinbach, Markus 2006 Modality-independent and Modality-specific Aspects of Grammaticalization in Sign Languages. In: Linguistics in Potsdam 24, 5–94.
Rathmann, Christian 2005 Event Structure in American Sign Language. PhD Dissertation, University of Texas at Austin.
Sapountzaki, Galini 2005 Free Functional Elements of Tense, Aspect, Modality and Agreement as Possible Auxiliaries in Greek Sign Language. PhD Dissertation, Centre of Deaf Studies, University of Bristol.
Schermer, Trude/Koolhof, Corline 1990 The Reality of Time-lines: Aspects of Tense in Sign Language of the Netherlands (SLN). In: Prillwitz, Siegmund/Vollhaber, Tomas (eds.), Proceedings of the Fourth International Symposium on Sign Language Research. Hamburg: Signum, 295–305.
Schmaling, Constanze 2000 Maganar Hannu: Language of the Hands. A Descriptive Analysis of Hausa Sign Language. Hamburg: Signum.
Senghas, Ann 1995 Children's Contribution to the Birth of Nicaraguan Sign Language. PhD Dissertation, MIT, Cambridge, MA.
Shaffer, Barbara 2002 can't: The Negation of Modal Notions in ASL. In: Sign Language Studies 3(1), 34–53.
Smith, Carlota 1997 The Parameter of Aspect (2nd Edition). Dordrecht: Kluwer.
Sutton-Spence, Rachel/Woll, Bencie 1999 The Linguistics of British Sign Language. An Introduction. Cambridge: Cambridge University Press.
Wilbur, Ronnie B. 1987 American Sign Language: Linguistic and Applied Dimensions. Boston: College-Hill.
Wilbur, Ronnie B. 2008 Complex Predicates Involving Events, Time and Aspect: Is This Why Sign Languages Look so Similar? In: Quer, Josep (ed.), Signs of the Time: Selected Papers from TISLR 2004. Hamburg: Signum, 217–250.
Wilbur, Ronnie 2010 The Semantics-Phonology Interface. In: Brentari, Diane (ed.), Sign Languages (Cambridge Language Surveys). Cambridge: Cambridge University Press, 357–382.
Wilcox, Sherman/Shaffer, Barbara 2006 Modality in American Sign Language. In: Frawley, William (ed.), The Expression of Modality. Berlin: Mouton de Gruyter, 207–237.
Wilcox, Sherman/Wilcox, Phyllis 1995 The Gestural Expression of Modality in ASL. In: Bybee, Joan/Fleischman, Suzanne (eds.), Modality in Grammar and Discourse. Amsterdam: Benjamins, 135–162.
Zeshan, Ulrike 2000 Sign Language in Indo-Pakistan. A Description of a Signed Language. Amsterdam: Benjamins.
Zeshan, Ulrike 2003 Aspects of Türk İşaret Dili (Turkish Sign Language). In: Sign Language & Linguistics 6(1), 43–75.
Zucchi, Sandro 2009 Along the Time Line: Tense and Time Adverbs in Italian Sign Language. In: Natural Language Semantics 17, 99–139.
Zucchi, Sandro/Neidle, Carol/Geraci, Carlo/Duffy, Quinn/Cecchetto, Carlo 2010 Functional Markers in Sign Languages. In: Brentari, Diane (ed.), Sign Languages (Cambridge Language Surveys). Cambridge: Cambridge University Press, 197–224.
Roland Pfau, Amsterdam (The Netherlands)
Markus Steinbach, Göttingen (Germany)
Bencie Woll, London (United Kingdom)
10. Agreement auxiliaries

1. Introduction
2. Form and function of agreement auxiliaries
3. Agreement auxiliaries in different sign languages – a cross-linguistic comparison
4. Properties of agreement auxiliaries in sign languages
5. Grammaticalization of auxiliaries across modalities
6. Conclusion
7. Literature
Abstract

In this chapter, I summarize and discuss findings on agreement auxiliaries from various sign languages used across the world today. These functional devices have evolved in order to compensate for the 'agreement gap' left when a plain verb is the main verb of a sentence. Although tracing back the evolutionary path of sign language auxiliaries can be quite risky due to the scarcity of documentation of older forms of these languages,
internal reconstruction of the grammaticalization paths in sign languages, cross-checked with cross-linguistic tendencies of grammaticalization of auxiliaries in spoken languages, provides us with some safe assumptions: grammaticalization follows more or less the same pathways irrespective of the visual-gestural modality of sign languages. At the same time, however, the development of sign language auxiliaries exhibits some unique characteristics, such as the possibility for a sign language agreement auxiliary to have a nominal, a pronominal, or even a gestural source of grammaticalization.
1. Introduction

Agreement between the verb and its arguments (i.e. subject and object or source and goal) in a sentence is one of the essential parts of the grammar in many languages. Most sign languages, like many spoken languages, possess inflectional mechanisms for the expression of verbal agreement (see chapter 7 for verb agreement). Auxiliaries, that is, free grammatical elements accompanying the main verb of the sentence, are not amongst the most usual means of expressing agreement in spoken languages (Steele 1978, 1981). Hence, the wide use of agreement auxiliaries in sign languages has become an issue of great interest (Steinbach/Pfau 2007). As discussed in chapter 7, in sign languages, verbal agreement is realized by modification of path movement and/or hand orientation of the verb stem, thereby morphosyntactically marking subject and object (or source/agent and goal/patient) in a sentence. Agreement auxiliaries use the same means for expressing agreement as agreement verbs do. They are mainly used with plain verbs, which cannot inflect for agreement. Agreement auxiliaries are either semantically empty, or their lexical meaning is very weak (i.e. light verbs); they occupy similar syntactic positions as inflected verbs or (in the case of light-verb-like auxiliaries) seem to be part of a serial verb construction. Only in a few cases are they able to inflect for aspect, but they commonly have reciprocal forms (Sapountzaki 2005; Quer 2006; Steinbach/Pfau 2007; de Quadros/Quer 2008). However, although sign languages have been considered unique as to their rich morphological agreement expressions, unbound agreement auxiliaries were until recently under-researched (Smith 1991; Fischer 1996). The focus of this chapter is on the grammatical functions, as well as on the evolutionary processes, which have shaped this set of free functional elements that are used as agreement auxiliaries in many genetically unrelated sign languages. It is not the main purpose of this study to give a detailed account of each and every auxiliary, although such information will be employed for outlining the theoretical issues related to sign language agreement auxiliaries. Specifically in the case of sign languages, the device of agreement auxiliaries is closely related to at least three other issues, which are discussed in depth in other chapters of this volume, namely morphological agreement inflection (see chapter 7), indexical pronouns (see chapter 11), and grammaticalization (see chapter 34 for further discussion). The present study builds on the information and assumptions provided in these three chapters and attempts to highlight links between them so that the grammatical function and the historical development of agreement auxiliaries will become clearer. This chapter is organized as follows: the form and function of agreement auxiliaries, as well as general implications of their study for linguistic theories and the human
language faculty, form the main parts of this chapter. Section 2 gives a brief overview of the forms and functions of agreement auxiliaries, in order to familiarize the reader with these specific grammatical markers. Moreover, I discuss the restrictions on the use of agreement auxiliaries in sign languages. In section 3, I introduce various sign languages that make use of agreement auxiliaries. Section 4 examines one by one a set of grammatical properties of sign language auxiliaries and considers possible implications for our understanding of sign languages as a major group of languages. In section 5, I compare auxiliaries in sign languages to their counterparts in spoken languages. The final section summarizes the main issues addressed in this chapter.
2. Form and function of agreement auxiliaries

In spoken languages, auxiliaries have many different functions, such as expressing tense, aspect, modality, and grammatical voice (genus verbi), amongst others. In addition, auxiliaries also express verbal agreement features in many spoken languages. However, the realization of agreement is usually not the main function of spoken language auxiliaries (but see the discussion of German tun ('do') in Steinbach and Pfau (2007)). The morphosyntactic expression of verb agreement was one of the first grammatical features of sign languages to be researched (Padden 1988). Some verbs in sign languages express agreement between subject and object (or between source and goal) on the morphosyntactic level by modifying path movement and/or hand orientation. These verbs, which share specific grammatical properties, are called agreement verbs. By contrast, another set of verbs, the so-called plain verbs, does not share the property of morphosyntactic expression of agreement, and it is primarily with these verbs that agreement auxiliaries find use, as will be described below. Interestingly, the main function of agreement auxiliaries is the overt realization of verb agreement with plain verbs that cannot be modified to express agreement. Hence, agreement auxiliaries in sign languages differ from auxiliaries in spoken languages in that they are not used to express tense, aspect, modality, or genus verbi.
2.1. Functions of sign language agreement auxiliaries

Agreement auxiliaries in sign languages have a different function from their counterparts in spoken languages in that their most important function is to express agreement with the subject and the object of the sentence – see example (1a) below. All of the agreement auxiliaries that have been described in sign languages accompany main verbs, as is the norm in most spoken languages, too. As stated above, sign language agreement auxiliaries usually accompany plain verbs, that is, verbs that cannot inflect for agreement. In addition, agreement auxiliaries occasionally accompany uninflected agreement verbs. Moreover, agreement auxiliaries have been found to accompany inflected agreement verbs (see example (1b) from Indopakistani Sign Language (IPSL)), thereby giving rise to constructions involving split and/or double inflection (following Steinbach/Pfau 2007). This will be described in more detail in section 4 below. Besides
verbs, in many sign languages, agreement auxiliaries also accompany predicative adjectives such as proud in example (1c) from German Sign Language (DGS). While a large proportion of the attested agreement auxiliaries are semantically empty, a subset of semi-grammaticalized auxiliaries such as give-aux in example (1d) from Greek Sign Language (GSL) still have traceable roots (in this case, the main verb to give) and still carry semantic load expressing causativity or transitivity. This results in agreement auxiliaries which select for the semantic properties of their arguments and put semantic restrictions on the possible environments they occur in. Moreover, since agreement verbs usually select animate arguments, most agreement auxiliaries also select [+animate] arguments. Example (1a) is from Massone and Curiel (2004), (1b) from Zeshan (2000), (1c) from Steinbach and Pfau (2007), and (1d) from Sapountzaki (2005); auxiliaries are in bold face.
(1) Agreement auxiliaries in different sign languages
    a. john1 mary2 love 1aux2 [LSA]
       'John loves Mary.'
    b. leftaux1 complete leftteach1 [IPSL]
       'He taught me everything completely.'
    c. ix1 poss1 brother ix3a proud 1pam3a [DGS]
       'I am proud of my brother.'
    d. ix2 2give-aux3 burden end! [GSL]
       'Stop being a trouble/nuisance to him/her!'
As in spoken languages, agreement auxiliaries in sign languages may also take on additional grammatical functions when used in different syntactic slots or in specific environments. They may, for instance, also function as disambiguation markers, as does the Brazilian Sign Language (LSB) agreement auxiliary when used preverbally. The DGS auxiliary can also be used as a marker of emphasis, similar to the insertion of do in emphatic sentences in English, and the auxiliaries in Flemish Sign Language (VGT) and GSL are also markers of transitivity and causativity.
2.2. Forms and origins of agreement auxiliaries

Based on the origins of agreement auxiliaries, Steinbach and Pfau (2007) have proposed a three-way distinction in their study on the grammaticalization of agreement auxiliaries: (i) indexical auxiliaries, which derive from concatenated pronouns; see the IPSL example in Figure 10.1 (note that we cannot exclude the possibility that the indexical signs went through an intermediate evolutionary stage of path/motion or transfer markers of a verbal nature); (ii) non-indexical agreement auxiliaries and semi-auxiliaries which derive from main verbs such as give, meet, go-to; see the GSL example in Figure 10.2; and (iii) non-indexical agreement auxiliaries which derive from nouns like person; see the DGS example in Figure 10.3 (the DGS auxiliary is glossed as pam, which stands for Person Agreement Marker; Rathmann 2001).
Fig. 10.1: Indexical auxiliary derived from pronoun: aux1; the three panels show 'you to him/her', 'I to him and he to me', and 'to each other' (IPSL, Zeshan 2000). Copyright © 2000 by John Benjamins. Reprinted with permission.
Fig. 10.2: Non-indexical agreement auxiliary derived from verb; pictures show beginning and end point of movement: give-aux (GSL, Sapountzaki 2005)
Fig. 10.3: Non-indexical agreement auxiliary derived from noun; pictures show beginning and end point of movement: 3apam3b (DGS, Rathmann 2001). Copyright © 2001 by Christian Rathmann. Reprinted with permission.
Note that neither the first nor the third subgroup of auxiliaries is common in spoken languages, where auxiliaries are usually grammaticalized from verbs (i.e. subgroup (ii)). In contrast, grammaticalization of auxiliaries from nouns is rare, if it exists at all. The abundant occurrence of sign language auxiliaries that have developed from pronouns or from a paralinguistic means such as indexical gestures is also intriguing. Actually, the latter development, from pointing sign via pronoun to subject/object-agreement auxiliary, is the most common one identified in the sign languages investigated to date; it is attested in, for instance, GSL (Sapountzaki 2005), IPSL, Japanese Sign Language (NS) (Fischer 1992, 1996), and Taiwan Sign Language (TSL) (Smith 1989, 1990). Fischer (1993) mentions the existence of a similar agreement auxiliary in Nicaraguan Sign Language (ISN), glossed as baby-aux1, which evolved in communication amongst deaf children. Another similar marker resembling an indexical auxiliary has been reported in studies on the communication of deaf children who are not exposed to a natural sign language but either to artificial sign systems (Supalla 1991; in Fischer 1996) or no sign systems at all (Mylander/Goldin-Meadow 1991; in Fischer 1996). This set of grammaticalized indexical (pointing) auxiliaries belongs to the broader category of pronominal or determiner indexical signs, which, according to the above findings, have evolved – following universal tendencies of sign languages – from pointing gestures to a lexicalized pointing sign (Pfau/Steinbach 2006, 2011). In contrast to indexical auxiliaries, the second set of agreement auxiliaries has verbal roots. The lexical meaning of these roots can still be traced. Such auxiliaries mostly function as semi-auxiliaries and have not spread their use to all environments. They are attested in TSL (Smith 1990), GSL (Sapountzaki 2005), VGT (Van Herreweghe/Vermeerbergen 2004), and Sign Language of the Netherlands (NGT) (Bos 1994). The third group consists at present of only two auxiliaries, attested in DGS and Catalan Sign Language (LSC). Both auxiliaries derive from the noun person (see Figure 10.3). In the following section, I analyze the individual characteristics of the auxiliaries of all three groups language by language.
3. Agreement auxiliaries in different sign languages – a cross-linguistic comparison

Agreement auxiliaries are part of the grammar of numerous sign languages, which will be described individually in alphabetical order in this section. Firstly, I describe the form and the syntactic properties of each auxiliary. Then I discuss the historical development of each auxiliary for all cases where there is evidence for a specific grammaticalization process. Finally, I turn to other issues that are essential for the function of each auxiliary (such as, for example, the use of non-manual features).
3.1. Comparative studies on agreement auxiliaries

Agreement auxiliaries are attested in a wide range of sign languages from around the world. These agreement auxiliaries can be found in historically unrelated sign languages, and appear at different stages in the evolutionary continuum. Agreement markers explicitly described as auxiliaries appear in descriptions of the following sign languages:

Argentine Sign Language (LSA)
Brazilian Sign Language (LSB)
Catalan Sign Language (LSC)
Danish Sign Language (DSL)
Flemish Sign Language (VGT)
German Sign Language (DGS)
Greek Sign Language (GSL)
Indopakistani Sign Language (IPSL)
Japanese Sign Language (NS)
Sign Language of the Netherlands (NGT)
Taiwan Sign Language (TSL)
Initially, work was done from a language-specific perspective, analyzing agreement markers or sets of markers within a specific sign language. Recently, comparative studies have also appeared in the field of sign language linguistics, shedding light on the similarities and differences of agreement auxiliaries across sign languages. The first comparative studies of agreement markers in sign languages (Engberg-Pedersen 1993; Fischer 1996; Zeshan 2000; Rathmann 2001) used findings from TSL, IPSL, DGS, and NGT combined with evidence of agreement auxiliaries in NS (which is historically related to TSL and structurally similar). The first cross-linguistic generalizations concerning auxiliaries in sign languages were already drawn in these studies: these auxiliaries, initially referred to as 'pointing signs', were identified not as pronouns but as functional elements that realize a specific grammatical function and are directly linked to the main verb of the sentence. But already in these studies, it became apparent how different the uses, origins, and distribution of pronominal auxiliaries may be, even in structurally related languages such as TSL and NS. More recent comparative studies by Quer (2006) and de Quadros and Quer (2008) discuss the typological status of two indexical auxiliaries in LSC and LSB, both glossed as aux-ix. Amongst other issues, these studies provide information on the syntax and inflection of agreement auxiliaries, as well as on the verbs they accompany and their distribution in various syntactic environments. However, the broadest typological study on agreement auxiliaries in sign languages to date is the one by Steinbach and Pfau (2007), which is also one of the main sources for this chapter. These authors emphasize the grammaticalization processes underlying agreement auxiliaries, comparing the grammaticalization of auxiliaries in spoken languages to that of agreement auxiliaries in sign languages. They conclude that, overall, both modality-independent and modality-specific cognitive processes and grammaticalization paths characterize the grammaticalization of agreement auxiliaries in sign languages; the peculiarity of indexical pronominal sources for agreement auxiliation in sign languages, for instance, is attributed to specific properties of the visual-gestural modality.
3.2. Argentine Sign Language

In their work on sign order in LSA, Massone (1994) and Massone and Curiel (2004) compare the articulatory nature of pronouns and of an indexical agreement auxiliary; they conclude that, morphologically, pronoun copy differs from the transitive auxiliary aux. The auxiliary almost always appears in sentence-final position (2a). However, when it is combined with an agreement verb, it may appear in a preverbal position (2b), and when it is used in interrogative clauses with an overt sentence-final interrogative pronoun, aux precedes the wh-pronoun (2c). Its function is restricted to the expression of agreement, while its form indicates that it is grammaticalized from two concatenated pronouns. The auxiliary is produced with a "smooth hold followed by a curved movement between two different loci in the signing space, also ending with a smooth hold" (Massone 1993). By contrast, a pronoun copy still retains more specific
beginning and end points of each pronoun, thus being grammaticalized to a lesser extent.

(2) Argentine Sign Language indexical auxiliary [LSA]
    a. bob ix1 send-letter 3aux1
       'Bob sends me a letter.'
    b. 3aaux3b say-yes
       'He says yes to her.'
                       wh
    c. ix2 say3 2aux3 what
       'What did you tell him/her?'
3.3. Flemish Sign Language

In VGT, the agreement auxiliary glossed as give (geven in Dutch) is phonologically similar to the VGT main verb meaning 'to give' (Van Herreweghe/Vermeerbergen 2004). give needs two animate arguments and tends to appear in reversible sentences where subject/source and object/goal can occupy interchangeable syntactic slots (3a), as long as the movement path of the auxiliary is from subject/source to object/goal. It functions as a semi-auxiliary, marking subject/object-agreement as well as causativity, a semantic property which can be traced back to its lexical source to give (see (3) below). Its lexical source and the grammaticalization process are apparently still visible in its present form.
(3) Flemish Sign Language semi-auxiliary give [VGT]
    a. girl give boy hit
       'The girl hits the boy.'
    b. man give dog caress
       'The man is caressing the dog.'
3.4. German Sign Language

The person agreement marker in DGS has been analyzed in several studies. The first study on this marker is the one by Keller (1998), where it was glossed as auf-ix because it used to be accompanied by a mouthing related to the German preposition auf ('on'). Phonologically, the auxiliary is similar to the sign for person. Rathmann (2001) glossed this auxiliary as pam (Person Agreement Marker), a gloss that hints at its phonological form as well as its morphosyntactic function in DGS. In this study, pam was described as a marker which mainly occupies a preverbal position (its postverbal position had not been discussed prior to this) and has the ability to inflect for singular, dual, and distributional plural. The postverbal use of pam in (4a) is described in Steinbach and Pfau (2007), who argue that the syntactic distribution of pam in DGS is subject to
dialectal variation. Rathmann (2001) was the first to describe this marker as an agreement auxiliary, which is used in association with verb arguments that refer to animate or human entities. pam can inflect for number and person. Rathmann argues that the use of pam with specific main verbs is subject to certain phonological constraints, that is, it is used primarily with plain verbs such as like in (4a), but it also complies with semantic criteria, in that the use of pam may force an episodic reading (4c). Besides plain verbs, pam can also be used with adjectival predicates such as proud in (4b), which do not select source and goal arguments, that is, with predicates that do not involve the transition of an object from A to B. Rathmann claims that pam, unlike most agreement verbs, does not express agreement with source and goal arguments but rather with subject and direct object. Interestingly, when used with uninflected backward verbs such as invite, pam does not move from the position of the source to the position of the goal but from the position of the subject to the position of the object (cf. also Steinbach/Pfau 2007; Pfau et al. 2011; Steinbach 2011). Hence, pam has developed into a transitivity marker which is not thematically (source/goal) but syntactically restricted (subject/object). Note finally that with plain verbs, pam can also be used as a reciprocal marker (Pfau/Steinbach 2003). Examples (4a) and (4b) are from Steinbach and Pfau (2007, 322), (4c) is from Rathmann (2001).
(4) German Sign Language auxiliary pam [DGS]
    a. mother ix3a neighbor new ix3b like 3apam3b
       'My mother likes the new neighbor.'
    b. ix1 poss1 brother ix3a proud 1pam3a
       'I am proud of my brother.'
    c. son2 mother1 5-years 1pam2 teach
       'A mother has been teaching her son for 5 years.' (episodic reading)
       ?? 'A mother used to teach her son for 5 years.' (generic reading)
3.5. Greek Sign Language

GSL has two different agreement auxiliaries. Firstly, there is some evidence for an indexical agreement auxiliary ix-aux, although it does not occur frequently in spontaneous data (Sapountzaki 2005). As in other sign languages where indexical auxiliaries are observed, the movement of the one-handed auxiliary starts with the finger pointing towards the subject locus and ends with the finger pointing towards the object locus, the movement being a smooth path from subject to object. In addition, ix-aux appears in a reciprocal form meaning 'transmitting to each other': in fact, the GSL sign usually glossed as each-other seems to be no more than the inflected, reciprocal form of ix-aux. The reciprocal form can also appear with strong aspectual inflection (progressive or repetitive). It can be used with the verbs telephone, fax, help, and communicate-through-interpreter. Interestingly, all of the verbs of transmission of information which typically combine with the GSL ix-aux are by default agreement verbs in GSL, which does not support the argument that the evolution of an indexical agreement auxiliary covers an 'agreement gap' in grammar. A hypothesis is that this indexical
sign selects only verbs that semantically relate to 'transmission of message'. However, there is not enough evidence to support this hypothesis further at this point. Secondly, a non-indexical semi-auxiliary marking agreement is also used in GSL. It is glossed as give-aux. In terms of grammatical function, its role is to make an intransitive mental state verb, such as feel-sleepy, transitive and, in addition, to express the causation of this state. Occasionally, it may combine with atelic verbs of activity like sing, suggesting that the use of give-aux is expanding to atelic, body-anchored verbs, in addition to plain verbs of mental or emotional state, which typically are also body-anchored. It appears that the criteria for selecting the verbs that combine with give-aux are both semantic and structural in nature. Usually (but not always, see for example (5b) below), give-aux appears in structures including first person (non-first to first, or first to non-first). The auxiliary may inflect for aspect, but it is more common for the main verb to carry aspectual inflection, while the auxiliary only carries the agreement information (5d).
(5) Greek Sign Language non-indexical auxiliary give-aux [GSL]
    a. deaf in-grouploc:c sign-too-much 3give-aux1 get-overwhelmed
       'Deaf who are too talkative make me bored and overwhelmed.'
    b. ix2 2give-aux3 burden end!
       'Stop being a trouble/nuisance to him/her!'
    c. ix2 sign++ stative 2give-aux1 get-overwhelmed, end!
       'Stop signing the same things again and again, you are getting very tiresome to me!'
    d. ix1 sea all-in-front-of-me sit, what? 3give-aux1 be-calm
       'Sitting by the sea makes me calm.'
3.6. Indopakistani Sign Language

The IPSL agreement auxiliary, which is glossed as aux (or ix in some earlier studies; e.g. Zeshan 2000), is similar to the indexical GSL auxiliary discussed in the previous section. The IPSL auxiliary has the phonological form of an indexical sign with a smooth movement between two or more locations, with the starting point at the locus linked to the source of the action and the end point(s) at the locus or loci linked to the goal(s) of the action. It is thus used to express spatial agreement with the source and goal arguments, as is illustrated in (6a) and (6b). Its sentence position varies, depending on whether the main verb it accompanies is a plain verb or an agreement verb. Generally, the auxiliary occupies the same syntactic slot as the indexical sign ix in its basic localizing function, that is, immediately before or after the (non-agreeing) verb. When used with plain verbs, the auxiliary immediately follows the predicate (6c). When accompanying an agreement verb, the auxiliary may precede and/or follow the main verb and thus may be used redundantly, yielding structures with double (6a) or even multiple markings of agreement (6b). It can also stand alone in an elliptical sentence (6d) where the main verb is known from the context. In this case, it is usually associated with communicative verbs (say, tell, talk, inform, amongst others). Finally, and similar to the GSL auxiliary ix-aux, it can also express reciprocity. aux is a verbal
functional element, which is semantically empty. In sum, it is a fully grammaticalized auxiliary verb that accompanies main verbs.
(6) Indopakistani Sign Language indexical auxiliary [IPSL]
    a. leftaux1 all complete leftteach1
       'He taught me everything completely.'
    b. sign work 1aux0 0aux3b 3baux0 0aux1 1aux0 0both3b
       'I discuss the matter via an interpreter.'
            q
    c. understand 2aux1?
       'Do you understand me?'
    d. yasin rightaux1 deaf little end
       'Yasin told me that there are few deaf people.'
3.7. Japanese Sign Language

Fischer (1996) provides evidence of an indexical auxiliary used in NS. Like aux-1 in TSL, which will be discussed below, aux-1 in NS seems to be a smoothed series of indexical pronouns (pronoun copy is a common phenomenon in NS, much more so than in American Sign Language (ASL)). aux-1 has assimilated the phonological borders of the individual pronouns, that is, their beginning and end points. Its sentence position is more fixed than that of pronouns. It does not co-occur with certain pronoun copy verbs and is not compatible with gender marking. All these verb-like properties show that aux-1 in NS is a grammaticalized person agreement marker and that it functions as an agreement auxiliary, as illustrated in (7) (Fischer 1996, 107).
(7) Japanese Sign Language indexical auxiliary [NS]
    child3a teacher3b like 3aaux-13b
    'The child likes the teacher.'
3.8. Sign Language of the Netherlands

Inspired by studies on agreement auxiliaries in TSL, Bos (1994) and Slobin and Hoiting (2001) identified an agreement auxiliary in NGT, glossed as act-on. The grammatical function of this auxiliary is to mark person agreement between first and second person or between first and third and vice versa; see example (8). act-on accompanies verbs selecting arguments which are specified for the semantic feature [+human]. The position of act-on in the sentence is not absolutely fixed, although in more than half of the examples analyzed, act-on occupies a postverbal position. In elliptical sentences, it can also stand alone without the main verb. Historically, act-on seems to be derived from the main verb go-to (Steinbach/Pfau 2007), but unlike go-to, act-on is often accompanied by the mouthing /op/, which corresponds to the Dutch preposition op
('on'), although act-on is not always used in contexts where the preposition op would be grammatically correct in spoken Dutch. In the Dutch equivalent of (8), for instance, the preposition op would not be used. As for the source, an alternative analysis would be that act-on is an indexical auxiliary, that is, that it is derived from two concatenated pronouns, just like the auxiliaries previously discussed.
(8) Sign Language of the Netherlands auxiliary act-on [NGT]
    ix1 partner ix3a love 3aact-on1
    'My boyfriend loves me.'
Bos (1994) found a few examples where both the main verb and act-on agree, that is, instances of double agreement marking. Consequently, she argues that agreement verbs and agreement auxiliaries are not mutually exclusive. In other words, act-on can combine with an already inflected agreement verb to form a grammatical sentence. Just like agreement verbs, act-on marks subject and object agreement by a change in hand orientation and movement direction. However, unlike agreement verbs, it has no lexical meaning, and its function is purely grammatical, meaning ‘someone performs some action with respect to someone else’. Remember that act-on might have developed from either a spatial verb or pronouns. According to Bos, act-on is distinct from NGT pronouns with respect to manner of movement (which is rapid and tense); also, unlike indexical auxiliaries derived from pronouns, act-on does not begin with a pointing towards the subject. Although it cannot be decided with certainty whether act-on is derived from two concatenated pronouns or from a verb, the latter option seems to be more plausible. This brings us back to the question of grammaticalization in the systems of sign languages. In both studies on act-on, reference is made to the accompanying mouthing (a language contact phenomenon), suggesting that the sign retains some traces of its lexical origin. In other sign languages, such as DGS, the initial use of mouthing with the agreement auxiliary (pam) has gradually decreased, so that the DGS auxiliary is currently used without mouthing (i.e. in a phonologically reduced form), thus being grammaticalized to a greater extent (Steinbach/Pfau 2007). Trude Schermer (p.c.) suggests that the NGT auxiliary is presently undergoing a similar change.
3.9. Taiwan Sign Language

Smith (1990, 1991) provides the first detailed discussion of agreement auxiliaries in a sign language. He focuses on TSL and describes which properties the TSL auxiliaries share with other auxiliaries cross-modally (Steele 1978). The three TSL auxiliaries serving as subject/object-agreement markers are glossed as aux-1, aux-2, and aux-11, based on their function (auxiliary) and their phonological form: (i) aux-1 is indexical, using the handshape conventionally glossed as '1'; (ii) aux-2 is identical to the TSL verb see, using the handshape glossed as '2'; and (iii) aux-11 is phonologically identical to the two-handed TSL verb meet, performed with two '1' handshapes (glossed as '11'). The use of aux-11 is illustrated in (9).
(9) Taiwan Sign Language non-indexical auxiliary aux-11 [TSL]
         top
    a. that vegetable, index1 1aux-113 not-like
       'I don't like that dish.'
    b. 3aaux-113b-[fem] teach3b-[fem]
       'He/she teaches her.'
TSL agreement auxiliaries differ from verbs in syntax: they most often appear in a fixed position before the main verb. They are closely attached to the main verb and mark person, number, and gender, but not tense, aspect, or modality. In (9b), gender is marked on the non-dominant hand by a dedicated handshape (Smith 1990, 222). Usually, the auxiliaries co-occur with plain verbs or with unmarked forms of agreement verbs. In (9b), however, both the lexical verb and the auxiliary are marked for object agreement (and the auxiliary in addition for subject agreement). Historically, the auxiliaries have developed from different sources. As mentioned above, aux-1 might result from a concatenation of pronouns, while aux-2 and aux-11 are phonetically identical to the TSL verbs see and meet, respectively, and seem to derive from 'frozen' uninflected forms of these verbs. They all seem to have proceeded along a specific path of grammaticalization and have lost their initial lexical meanings, as is evident from the examples in (9).
3.10. Sign languages without agreement auxiliaries

So far, we have seen that a number of unrelated sign languages employ agreement auxiliaries to express verb agreement in various contexts. However, this does not necessarily mean that agreement auxiliaries are modality-specific obligatory functional elements that can be found in all sign languages. Actually, quite a few sign languages have no agreement auxiliaries at all. ASL, for example, does not have dedicated agreement auxiliaries (de Quadros/Lillo-Martin/Pichler 2004). Likewise, British Sign Language (BSL), like ASL, distinguishes between agreement verbs and plain verbs but has not developed a means to express agreement with plain verbs (Morgan/Woll/Barrière 2003). For ASL, it has been argued that non-manual markers such as eye-gaze are used to mark object agreement with plain verbs (cf. Neidle et al. 2000; Thompson/Emmorey/Kluender 2006). In the case of young sign languages, agreement as an inflectional category may not even exist, as is the case in Al-Sayyid Bedouin Sign Language (ABSL), used in the Bedouin community of Al-Sayyid in the Negev in Israel (Aronoff et al. 2004).
4. Properties of agreement auxiliaries in sign languages

4.1. Inflection carried by agreement auxiliaries

In sign languages, the grammatical expression of agreement between the verb and two of its arguments is restricted to a specific group of verbs, the so-called agreement verbs.
In some sign languages, agreement auxiliaries take up this role when accompanying plain verbs, which cannot inflect for subject/object-agreement. When pam accompanies an agreement verb, the latter usually does not show overt agreement (Rathmann 2001). Equally clear-cut is the distribution of agreement auxiliaries in many sign languages. In LSB, the indexical agreement auxiliary usually combines with plain verbs, but when the same (indexical) form accompanies an agreement verb, the auxiliary takes over the function of a subject/object-agreement marker and the agreement verb remains uninflected. Interestingly, in LSB, in these cases the sentential position of the marker is different (preverbal instead of postverbal), possibly indicating a different grammatical function of the auxiliary. In some sign languages (e.g. DGS), double inflection of both the main verb and the agreement auxiliary is possible. Such cases are, however, considered redundant, that is, not essential for marking verb agreement. Possibly, double agreement serves an additional pragmatic function like emphasis in this case (Steinbach/Pfau 2007). However, there are exceptions to this tendency: in some other sign languages, such as IPSL or LSC, agreement auxiliaries commonly accompany agreement verbs, either inflected or uninflected, without any additional pragmatic function (Quer 2006; de Quadros/Quer 2008). In contrast, in other sign languages, such as, for example, GSL and NS, examples of double agreement are reported to be ungrammatical (Fischer 1996). A related issue is the semantics of the auxiliary itself, and the semantic properties of its arguments in the sentence. Most auxiliaries that evolved from indexical (pronominal) signs are highly grammaticalized, purely functional, and semantically empty elements. The movement from subject to object may go back to a gesture tracing the path of physical transfer of a concrete or abstract entity from one point in the sign space to another. The grammaticalized agreement auxiliary expresses the metaphorical transfer from the first syntactic argument to the second one. Although in sign languages transfer from a point x to a point y in topographic sign space is commonly realized by means of classifiers, which carry semantic information about the means of or the instrument involved in this transfer (see chapter 8 for discussion), the movement of a semantically empty indexical handshape can be seen as the result of a desemanticization process in the area of the grammatical use of the sign space. While in some sign languages, agreement auxiliaries are fully functional elements that may combine with a large set of verbs, in other sign languages, agreement auxiliaries cannot accompany main verbs of all semantic groups. Take, for example, the GSL ix-aux, which only accompanies verbs expressing transmission of a metaphorical entity, like send-fax or telephone (Sapountzaki 2005). In NGT, TSL, and LSB, agreement auxiliaries may combine with main verbs of any semantic group but require their arguments to be specified as [+human] or at least [+animate]. The ability of agreement auxiliaries to inflect for aspect, as well as their ability to inflect for person, also varies amongst sign languages. In sign languages, various types of aspectual inflection are usually expressed on the main verb by means of reduplication and holds (see chapter 9 for discussion).
In auxiliary constructions, aspectual inflection is still usually realized on the main verb ⫺ in contrast to what is commonly found in spoken languages. In LSB, for instance, aux-ix cannot inflect for aspect. The same holds for pam in DGS. However, in a few sign languages, agreement auxiliaries can express aspectual features (e.g. GSL give-aux). Similarly, in some sign languages, agreement auxiliaries do not have a full person paradigm. GSL give-aux has a strong
preference to occur in first person constructions, while in sentences with a non-first person subject and object, ix-aux is usually used. Thus, in GSL, the distribution of ix-aux and give-aux seems to be complementary. Note, finally, that some of the agreement auxiliaries, such as pam in DGS, ix-aux and give-aux in GSL, and aux in IPSL, can also be used in reciprocal constructions. The reciprocal form of the agreement auxiliaries may either be two-handed ⫺ both hands moving simultaneously in opposite directions ⫺ or one-handed, in which case the reciprocal function is expressed by a sequential backward movement.
4.2. Syntactic position

In syntax, agreement auxiliaries show a considerable amount of variation. The position of the DGS auxiliary pam appears to be subject to dialectal variation, as it may occupy either a preverbal (post-subject; Rathmann 2001) or a postverbal position (Steinbach/Pfau 2007). By contrast, LSA aux and LSB aux-ix usually occupy the sentence-final position. In GSL and TSL, indexical agreement auxiliaries are attested in two different positions: preverbal or sentence-final in GSL, and sentence-initial or preverbal in TSL. Unlike in DGS, in GSL this variation is not dialectal. In some sign languages, the function of the agreement auxiliary may vary with the syntactic position. The LSB auxiliary can, for example, appear in a preverbal position but with a different grammatical function: in this position, it is used as a disambiguation marker. While the GSL indexical agreement auxiliary ix-aux occupies the sentence-final or at least post-verbal position, give-aux appears immediately preverbal. The reason for this distribution may be articulatory: in most uses of give-aux, the end point of the movement is the signer (i.e. position ‘1’). Since the auxiliary only accompanies body-anchored signs, an order in which the body-anchored main verb follows the agreement auxiliary seems to be optimal. Other parameters may also play a role in determining the syntactic position of agreement auxiliaries. The indexical agreement auxiliary in IPSL, for instance, occupies the sentence-final position when the main verb is plain, while it has a more flexible distribution when the main verb is an agreeing verb. Concerning syntactic structure, sign languages can be divided into two types: (i) [+aux] sign languages, like the sign languages with agreement auxiliaries described in this chapter, and (ii) [−aux] languages, like ASL or BSL, which do not have agreement auxiliaries. According to Rathmann (2001), only [+aux] languages project agreement phrases where the agreement auxiliary checks agreement features (note that Rathmann uses pam as a general label for agreement auxiliaries and thus distinguishes between [±pam] languages).
4.3. Non-manual features

Mouthing ⫺ an assimilated cross-modal loan of (a part of) a spoken word (see chapter 35 for discussion) ⫺ is a phenomenon that not all of the studies on agreement auxiliaries address. In at least one case, that of the NGT agreement auxiliary act-on, mouthing of the Dutch preposition op is still fairly common (at least for some signers)
and can be considered an integral part of the lexical form of the auxiliary. However, according to recent studies at the Dutch Sign Centre (Nederlands Gebarencentrum), use of the mouthing is gradually fading. A similar process has previously been described for the DGS auxiliary pam, which has lost its accompanying mouthing /awf/. This process can be considered an instance of phonological reduction. Moreover, in DGS, the mouthing associated with an adjacent verb or adjective may spread over pam, thus suggesting the development of pam into a clitic-like functional element. In GSL, the non-indexical auxiliary give-aux, unlike the phonologically similar main verb give, is not accompanied by a mouthing. Besides its specific syntactic position, which differs from that of the main verb, it is recognized as an agreement auxiliary because it is used without mouthing, a fact that further supports the hypothesis of ongoing grammaticalization of agreement auxiliaries. Another interesting issue for theories of grammaticalization is the source of the mouthings accompanying act-on and pam in NGT and DGS, respectively. The mouthing of the corresponding Dutch and German prepositions op and auf can either be analyzed as a cross-modal loan expression or as a creole neologism taken from a language of a different (oral) modality into a sign language. In both languages, the prepositions are used with one-place predicates such as wait or be proud to mark objects (e.g. Ich warte auf dich, ‘I am waiting for you’). Hence, the use of the agreement auxiliaries in NGT and DGS corresponds to some extent to the use of the prepositions in Dutch and German (e.g. ix wait pam, ‘I am waiting for you’). However, the use of the auxiliaries and the accompanying mouthings in NGT and DGS does not exactly match the use of the prepositions op and auf in Dutch and German (e.g. ix laugh pam ⫺ Ich lache über/*auf dich, ‘I laugh at you’). Moreover, although neither preposition functions as an auxiliary in Dutch or German, the semantics of a preposition meaning on nevertheless fits the semantic criteria for agreement auxiliary recruitment, that is, the motion and/or location schemas proposed by Heine (1993).
4.4. Degree of grammaticalization

As mentioned above, agreement auxiliaries have developed from three different sources: (i) pronouns, (ii) verbs, and (iii) nouns. Indexical agreement auxiliaries are generally grammaticalized to a high degree. Examples of fully grammaticalized agreement markers are the TSL auxiliary aux-1, its NS and IPSL counterparts, and the LSB auxiliary aux-ix, all of which evolved from indexical signs. At their present stage, they are semantically empty function words ⫺ they are reported to have no meaning of their own, and they only fulfill a grammatical function in combination with a main verb. They can accompany many different verbs in these sign languages, and their position can be predicted with some accuracy; in most cases, they immediately precede or follow the main verb. Still, we also find some cases of indexical agreement auxiliaries which are not fully grammaticalized: they do not inflect freely for person, and they select only arguments which are specified for the semantic feature [+human]. Moreover, the IPSL agreement auxiliary exhibits selectional restrictions on the verbs it accompanies, as it is usually associated with communicative verbs meaning ‘say’, ‘tell’, ‘talk’, or ‘inform’ (Zeshan, p.c.).
Non-indexical agreement auxiliaries generally show a lower degree of grammaticalization. The GSL auxiliary give-aux has developed from the main verb give. Although it is not yet fully grammaticalized, there is clear evidence for this grammaticalization path. Like the main verb, it requires a human recipient of an action; it has also inherited the source/goal argument structure of the main verb. However, give-aux does not inflect freely for agreement; it only combines with certain classes of verbs, and its use is less systematic and less frequent than the use of the other auxiliary in GSL. Hence, it is not yet a fully grammaticalized agreement marker. Although there is no historical relation between GSL and VGT, a very similar auxiliary is also found in VGT (see examples in (3) above). The VGT auxiliary give acts as an agreement marker between actor and patient, both specified as [+animate]. Moreover, give appears not to be fully grammaticalized, as it has, for example, selectional restrictions on its two arguments. In both form and function it thus resembles the GSL auxiliary give-aux. GSL give-aux and VGT give comply with the criteria for low grammaticalization proposed by Bybee, Perkins, and Pagliuca (1994), Heine (1993), and Heine and Kuteva (2002): (i) selectivity of a marker with respect to the groups of verbs it combines with, (ii) low ability for inflection, and (iii) synchronic use alongside an identical lexical form. In addition, these markers still carry a significant amount of semantic content and, due to semantic restrictions, are not used as frequently as indexical auxiliaries in a sign language. Finally, these markers express more than only agreement in that they also convey causativity and change-of-state. This shows that the grammaticalization of these non-indexical agreement auxiliaries has not reached the end of the grammaticalization continuum. However, it is not the case that all non-indexical auxiliaries show a lesser degree of grammaticalization. In some sign languages, non-indexical agreement auxiliaries can also be fully grammaticalized, as is, for example, the case in TSL and DGS, where the non-indexical markers appear to be highly grammaticalized. Consequently, the grammaticalization patterns of non-indexical agreement auxiliaries vary from language to language, but overall they show a somewhat lower degree of grammaticalization than the indexical ones. This is evidenced by a narrower grammatical distribution: non-indexical auxiliaries may not have a full inflectional paradigm (GSL), may not combine with all semantic groups of arguments (GSL, LSB, VGT, and DGS), may express more than one grammatical function (VGT and GSL), and may show an overall higher semantic load and light-verb characteristics. In the next section, the discussion of shared linguistic properties of agreement auxiliaries in sign languages is expanded to include auxiliaries in spoken languages.
5. Grammaticalization of auxiliaries across modalities

5.1. Grammaticalization of gestures: the notion of transfer

Universally, the main function of grammaticalization as a cognitive mechanism is the “exploitation of old means for novel functions” (Werner/Kaplan 1963, 403; cited in Traugott/Heine 1991, 150). Following this reasoning, one may argue that sign languages needed some grammatical means to express grammatical categories such as verb agreement.
However, this view does not provide us with sufficient answers to the question why grammaticalization occurs in all languages, and why grammaticalized elements often co-occur with other devices that express the same meaning (Heine/Claudi/Hünnemeyer 1997, 150; Traugott/Heine 1991). Borrowing of linguistic tokens might be an alternative means of incorporating elements that fulfill novel functions but, apparently, the driving forces of borrowing are not always adequate in practice cross-linguistically (Sutton-Spence 1990; cited in Sapountzaki 2005). A major issue in the evolution of sign language auxiliaries is the fact that some of them are not simply grammaticalized from lexical items, but have evolved from a non-linguistic source, that is, gestures. Indeed, strictly following the terminology of spoken language linguistics, gestures cannot be considered a lexical source that serves as the basis of grammaticalization. According to Steinbach and Pfau (2007), agreement in sign languages has a clear gestural basis (see also Wilcox (2002), Pfau/Steinbach (2006, 2011), and Pfau (2011) on the grammaticalization of manual and non-manual gestures). In sign languages, gestures can enter the linguistic system either as lexical elements or as grammatical markers (also see chapter 34 on grammaticalization). Some of these lexicalized gestures, such as the index sign index, can further develop into auxiliaries. As mentioned at the beginning of this chapter, the most common assumption for (Indo-European) spoken languages is that auxiliaries derive from verbs (e.g. English will, may, shall, do). Irrespective of this apparent difference, however, there are common cognitive forces, such as the concept of transition from one place to another, which is a common source for grammaticalization in both modalities. Many spoken languages, some belonging to the Indo-European family and some not, use verbs such as ‘go’, ‘come’, or ‘stay’ as auxiliaries (Heine 1993; Heine/Kuteva 2002). Similarly, tracing a metaphorical path from the subject/agent to the object/goal, for example, is quite common in many sign languages. This is just another realization of the same concept of transition, although this spatial concept is realized in a modality-specific way in the sign space. Thus, the spatial concept of transition from A to B is grammatically realized by gestural means in sign languages, with the use of agreement verbs or agreement auxiliaries. In the case of most agreement verbs, the physical movement between specific points in space either represents transfer of a concrete object (as in the case of give) or transfer of an abstract entity such as information (as in the case of verbs of communication, e.g. explain). Finally, in the case of agreement auxiliaries, this basic concept of transition may be even more abstract, since agreement auxiliaries may denote transfer of virtually any relation from subject to object, that is, they denote transfer in a grammatical sense (Steinbach/Pfau 2007; cf. also Steinbach 2011).
5.2. Semantic emptiness and syntactic expansion

Two essential criteria for complete grammaticalization are the semantic emptiness of a grammaticalized item and its syntactic expansion. Applying these criteria to sign languages, the pointing handshape of indexical auxiliaries can be analyzed as a reduced two-dimensional index, which carries as little visual information as possible in order to denote motion between two or more points in space. In accordance with the second criterion, that is, syntactic expansion, agreement auxiliaries again express grammar in a physically visible form in sign languages. ‘Syntax’ is a Greek word with the original
meaning of ‘grouping entities in an order’, and agreement auxiliaries in sign languages, when syntactically expanded, visibly do exactly this: at the very physical level of articulation, they create motor-visual links between points in the sign space in front of the signer’s body, a peculiarity unique to sign languages. In sign languages, syntactic expansion of the use of an agreement auxiliary is then realized as maximization of path tracings in the sign space, with minimal movement constraints. Moreover, indexical agreement auxiliaries in sign languages originate from reference points in space and are linked to the concept of transfer in real space. Overall, however, the use of indexical auxiliaries is functional to a higher degree than that of auxiliaries that originate from verbs or nouns. On the other hand, lexical origins still remain transparent in the respective non-indexical auxiliaries. Auxiliaries derived from verbs (such as give, see, meet) comply with universals of grammaticalization. But even in the case of pam, which derives from the noun person, the handshape and movement of the underlying noun provide an essential phonological and semantic basis for the grammaticalization of the agreement auxiliary (Rathmann/Mathur 2002; Steinbach/Pfau 2007). Cross-linguistically, it is typical for verbs denoting transfer (such as give or send) to grammaticalize into markers of transitivity. Interestingly, typologically different languages like Mandarin Chinese (an isolating language) on the one hand and GSL and VGT on the other use auxiliaries drawn from a lexical verb meaning ‘give’ (Ziegeler 2000; Van Herreweghe/Vermeerbergen 2004; Sapountzaki 2004, 2005). Moreover, the use of a main verb meaning ‘go to’ as the source for the NGT auxiliary act-on and the use of a verb meaning ‘meet’ as the source of TSL aux-11 can be explained if we apply the Motion Schema proposed as a general (modality-independent) schema for the grammaticalization of auxiliaries (Heine 1993). By contrast, there is no direct equivalent in spoken languages that corresponds to the use of see as the source of the TSL auxiliary aux-2. Steinbach and Pfau (2007) point out that the verb see could be included in the category of “a proposition involving mental process or utterance verbs such as ‘think’, ‘say’, etc.” (Heine 1993, 35). The following example from Tonga, a Bantu language of Zambia, illustrates the grammaticalization of an auxiliary from a mental state verb in spoken languages. In (10), the verb yeeya (‘to think’) has developed into an auxiliary marking future tense (Collins 1962; cited in Heine 1993, 35).

(10) Joni u-yeeya ku-fwa [Tonga]
     John 3.sg-think inf-die
     ‘John is about to die.’ (or: ‘John will die.’)
It should come as no surprise that in sign languages, whose users perceive language visually, verbs like see are linked to mental events in a more direct way than in spoken languages. Thus, see may be used in sign languages within the mental process event schema as an optimal source for the grammaticalization of auxiliaries. Moreover, the TSL verb see belongs to the group of agreement verbs and can therefore more readily grammaticalize into an agreement auxiliary. Note, finally, that in most sign languages, mental state verbs are usually body-anchored plain verbs, articulated on or close to the (fore)head. Consequently, typical mental process verbs such as think are less available for carrying agreement as auxiliaries.
5.3. Frequency of occurrence as a criterion of grammaticalization

The issues of syntactic expansion and of use in different syntactic environments are linked to the frequency of use of auxiliaries. One can hypothesize that agreement marking by free functional morphemes in sign languages may not be as developed as in the domains of aspect and modality. According to cross-linguistic evidence on auxiliaries, aspectual auxiliaries are the most frequent and thus the most developed auxiliaries, whereas agreement auxiliaries are the least frequent and thus the least developed ones ⫺ and also the ones with the lowest degree of grammaticalization in the wide sample of spoken languages examined by Steele (1981). The examples discussed in this chapter show, however, that agreement auxiliaries are used abundantly in sign languages and that, in many different sign languages, they are already highly grammaticalized functional elements. The following table sums up the properties and distribution of agreement auxiliaries in the sign languages discussed in this chapter (this is an extended version of a table provided in Steinbach/Pfau 2007).

Tab. 10.1: Properties of agreement auxiliaries across sign languages
        N  source         aspectual   double   reciprocal   sentence
                          marking     agr?     marking?     position
LSA     1  pronouns       on verb     yes      ??           sf
LSB     1  pronouns       on verb     no       ??           sf(a) > prv
LSC     2  pronouns       on verb     ??       ??           ??
           noun person    on verb     ??       ??           ??
DGS     1  noun person    on verb     yes      yes (1H)     sf(a), (prv)
VGT     1  verb give      ??          ??       ??           prv
GSL     2  pronouns       on verb     no       ??           sf, prv
           verb give      on aux      no       ??           prv
IPSL    1  pronouns       on verb     yes      yes (2H)     sf(a) > si
NS      1  pronouns       on verb     no       ??           sf > prv, si
NGT     1  verb go-to     on verb     yes      yes (1H)     sf(a)
TSL     3  pronouns       on verb     yes      yes (2H)     prv, si
           verb see       on verb     yes      yes (2H)     prv, si
           verb meet      on verb     yes      yes (2H)     prv, si

Abbreviations used in Tab. 10.1: si = sentence-initial, sf = sentence-final, prv = pre-verbal, > means “more frequent than”, 1H = one-handed, 2H = two-handed, ‘??’ indicates that no information is available.
(a) Some signs, such as wh-signs, manual negation, or aspectual markers, may follow the auxiliary.
6. Conclusion

Many different sign languages across the world make use of agreement auxiliaries. These auxiliaries share many properties in terms of their phonological form, syntactic
distribution, lexical sources, and indirect gestural origins. However, some degree of variation between agreement auxiliaries in different sign languages is also attested, as would be expected in any sample of unrelated natural languages. Based on these findings, future research with a wider cross-linguistic scope, including wider samples of as yet unresearched sign languages, might deepen our understanding of the common properties of auxiliaries in sign languages in particular, as well as of the similarities and differences between sign and spoken languages in general, thus shedding more light on the cognitive forces of grammaticalization and auxiliation across modalities.
7. Literature

Aronoff, Mark/Meir, Irit/Padden, Carol/Sandler, Wendy 2004 Morphological Universals and the Sign Language Type. In: Booij, Geert/Marle, Jaap van (eds.), Yearbook of Morphology. Dordrecht: Kluwer, 19⫺40.
Bos, Heleen 1994 An Auxiliary Verb in Sign Language of the Netherlands. In: Ahlgren, Inger/Bergman, Brita/Brennan, Mary (eds.), Perspectives on Sign Language Structure. Papers from the Fifth International Symposium on Sign Language Research. Vol. 1. Durham: ISLA, 37⫺53.
Boyes Braem, Penny/Sutton-Spence, Rachel (eds.) 2001 The Hands are the Head of the Mouth: The Mouth as Articulator in Sign Language. Hamburg: Signum.
Bybee, Joan/Perkins, Revere/Pagliuca, William 1994 The Evolution of Grammar: Tense, Aspect and Modality in the Languages of the World. Chicago: University of Chicago Press.
Comrie, Bernard 1981 Language Universals and Linguistic Typology: Syntax and Morphology. Oxford: Blackwell.
Engberg-Pedersen, Elisabeth 1993 The Ubiquitous Point. In: Signpost 6(2), 2⫺8.
Fischer, Susan D. 1992 Agreement in Japanese Sign Language. Paper Presented at the Annual Meeting of the Linguistic Society of America, Los Angeles.
Fischer, Susan D. 1993 Auxiliary Structures Carrying Agreement. Paper Presented at the Workshop Phonology and Morphology of Sign Language, Amsterdam and Leiden. [Summary in: Hulst, Harry van der (1994), Workshop Report: Further Details of the Phonology and Morphology of Sign Language Workshop. In: Signpost 7, 72.]
Fischer, Susan D. 1996 The Role of Agreement and Auxiliaries in Sign Languages. In: Lingua 98, 103⫺119.
Heine, Bernd 1993 Auxiliaries: Cognitive Forces and Grammaticalization. Oxford: Oxford University Press.
Heine, Bernd/Claudi, Ulrike/Hünnemeyer, Friederike 1997 From Cognition to Grammar: Evidence from African Languages. In: Givón, Talmy (ed.), Grammatical Relations: A Functionalist Perspective. Amsterdam: Benjamins, 149⫺188.
Heine, Bernd/Kuteva, Tania 2002 On the Evolution of Grammatical Forms. In: Wray, Alison (ed.), The Transition to Language. Studies in the Evolution of Language. Oxford: Oxford University Press, 376⫺397.
Hopper, Paul/Traugott, Elizabeth 1993 Grammaticalization. Cambridge: Cambridge University Press.
Janzen, Terry/Shaffer, Barbara 2002 Gesture as the Substrate in the Process of ASL Grammaticization. In: Meier, Richard P./Cormier, Kearsy/Quinto-Pozos, David (eds.), Modality and Structure in Signed and Spoken Languages. Cambridge: Cambridge University Press, 199⫺223.
Keller, Jörg 1998 Aspekte der Raumnutzung in der Deutschen Gebärdensprache. Hamburg: Signum.
Massone, Maria Ignacia 1993 Auxiliary Verbs in LSA. Paper Presented at the 2nd Latin-American Congress on Sign Language and Bilingualism, Rio de Janeiro.
Massone, Maria Ignacia 1994 Some Distinctions of Tense and Modality in Argentine Sign Language. In: Ahlgren, Inger/Bergman, Brita/Brennan, Mary (eds.), Perspectives on Sign Language Structure. Durham: ISLA, 121⫺130.
Massone, Maria Ignacia/Curiel, Monica 2004 Sign Order in Argentine Sign Language. In: Sign Language Studies 5(1), 63⫺93.
Morgan, Gary/Barriere, Isabelle/Woll, Bencie 2003 First Verbs in British Sign Language Development. In: Working Papers in Language and Communication Science 2, 57⫺66.
Neidle, Carol/Kegl, Judy/MacLaughlin, Dawn/Bahan, Benjamin/Lee, Robert G. 2000 The Syntax of American Sign Language. Functional Categories and Hierarchical Structure. Cambridge, MA: MIT Press.
Padden, Carol 1988 The Interaction of Morphology and Syntax in American Sign Language. New York: Garland Publishing.
Pfau, Roland 2011 A Point Well Taken: On the Typology and Diachrony of Pointing. In: Napoli, Donna Jo/Mathur, Gaurav (eds.), Deaf Around the World. The Impact of Language. Oxford: Oxford University Press, 144⫺163.
Pfau, Roland/Salzmann, Martin/Steinbach, Markus 2011 A Non-hybrid Approach to Sign Language Agreement. Paper Presented at the 1st Formal and Experimental Advances in Sign Languages Theory (FEAST), Venice.
Pfau, Roland/Steinbach, Markus 2003 Optimal Reciprocals in German Sign Language. In: Sign Language & Linguistics 6, 3⫺42.
Pfau, Roland/Steinbach, Markus 2006 Modality-independent and Modality-specific Aspects of Grammaticalization in Sign Languages. In: Linguistics in Potsdam 24, 5⫺98. [Available at http://www.ling.uni-potsdam.de/lip/]
Pfau, Roland/Steinbach, Markus 2011 Grammaticalization in Sign Languages. In: Heine, Bernd/Narrog, Heiko (eds.), Handbook of Grammaticalization. Oxford: Oxford University Press, 681⫺693.
Quadros, Ronice M. de/Lillo-Martin, Diane/Chen Pichler, Deborah 2004 Clause Structure in LSB and ASL. Paper Presented at the 26. Jahrestagung der Deutschen Gesellschaft für Sprachwissenschaft, Mainz.
Quadros, Ronice M. de/Quer, Josep 2008 Back to Back(wards) and Moving on: On Agreement, Auxiliaries and Verb Classes. In: Quadros, Ronice M. de (ed.), Sign Languages: Spinning and Unraveling the Past, Present, and Future. Forty-five Papers and Three Posters from the 9th Theoretical Issues in Sign Language Research Conference, Florianopolis, Brazil, December 2006. Petrópolis: Editora Arara Azul. [Available at: www.editora-arara-azul.com.br/EstudosSurdos.php]
Quer, Josep 2006 Crosslinguistic Research and Particular Grammars: A Case Study on Auxiliary Predicates in Catalan Sign Language (LSC). Paper Presented at the Workshop on Crosslinguistic Sign Language Research, Max Planck Institute for Psycholinguistics, Nijmegen.
Rathmann, Christian 2001 The Optionality of Agreement Phrase: Evidence from Signed Languages. MA Thesis, The University of Texas at Austin.
Rathmann, Christian/Mathur, Gaurav 2002 Is Verb Agreement the Same Cross-modally? In: Meier, Richard P./Cormier, Kearsy/Quinto-Pozos, David (eds.), Modality and Structure in Signed and Spoken Languages. Cambridge: Cambridge University Press, 370⫺404.
Sapountzaki, Galini 2004 Free Markers of Tense, Aspect, Modality and Agreement in Greek Sign Language (GSL): The Role of Language Contact and Grammaticisation. Paper Presented at the ESF Workshop Modality Effects on the Theory of Grammar: A Cross-linguistic View from Sign Languages of Europe, Barcelona.
Sapountzaki, Galini 2005 Free Functional Markers of Tense, Aspect, Modality and Agreement as Possible Auxiliaries in Greek Sign Language. PhD Dissertation, Centre of Deaf Studies, University of Bristol.
Slobin, Dan/Hoiting, Nini 2001 Typological and Modality Constraints on Borrowing: Examples from the Sign Language of the Netherlands. In: Brentari, Diane (ed.), Foreign Vocabulary in Sign Languages: A Cross-linguistic Investigation in Word Formation. Mahwah, NJ: Erlbaum, 121⫺137.
Smith, Wayne 1989 The Morphological Characteristics of Verbs in Taiwan Sign Language. PhD Dissertation, Ann Arbor.
Smith, Wayne 1990 Evidence for Auxiliaries in Taiwan Sign Language. In: Fischer, Susan/Siple, Patricia (eds.), Theoretical Issues in Sign Language Research. Vol. 1: Linguistics. Chicago: University of Chicago Press, 211⫺228.
Steele, Susan 1981 An Encyclopedia of AUX: A Study in Cross-linguistic Equivalence. Cambridge, MA: MIT Press.
Steinbach, Markus 2011 What Do Agreement Auxiliaries Reveal About the Grammar of Sign Language Agreement? In: Theoretical Linguistics 37, 209⫺221.
Steinbach, Markus/Pfau, Roland 2007 Grammaticalization of Auxiliaries in Sign Languages. In: Perniss, Pamela/Pfau, Roland/Steinbach, Markus (eds.), Visible Variation: Comparative Studies on Sign Language Structure. Berlin: Mouton de Gruyter, 303⫺339.
Thompson, Robin/Emmorey, Karen/Kluender, Robert 2006 The Relationship Between Eye Gaze and Verb Agreement in American Sign Language: An Eye-tracking Study. In: Natural Language and Linguistic Theory 24, 571⫺604.
Traugott, Elizabeth/Heine, Bernd (eds.) 1991 Approaches to Grammaticalization. Vol. 1: Focus on Theoretical and Methodological Issues. Amsterdam: Benjamins.
Van Herreweghe, Mieke/Vermeerbergen, Myriam 2004 The Semantics and Grammatical Status of Three Different Realizations of geven (give): Directional Verb, Polymorphemic Construction, and Auxiliary/Preposition/Light Verb. Poster Presented at the 8th International Conference on Theoretical Issues in Sign Language Research (TISLR 8), Barcelona.
Wilcox, Sherman 2002 The Gesture-language Interface: Evidence from Signed Languages. In: Schulmeister, Rolf/Reinitzer, Heimo (eds.), Progress in Sign Language Research. In Honor of Siegmund Prillwitz. Hamburg: Signum, 63⫺81.
Zeshan, Ulrike 2000 Sign Language in Indo-Pakistan: A Description of a Signed Language. Amsterdam: Benjamins.
Ziegeler, Debra 2000 A Possession-based Analysis of the ba-construction in Mandarin Chinese. In: Lingua 110, 807⫺842.
Galini Sapountzaki, Volos (Greece)
11. Pronouns

1. Pronouns in spoken languages and sign languages
2. Personal pronouns
3. Proforms
4. Conclusion
5. Literature
Abstract

The term ‘pronoun’ has been used with spoken languages to refer not only to personal pronouns ⫺ i.e. those grammatical items that ‘stand for’ nouns or noun phrases ⫺ but also to ‘proforms’, including words such as demonstratives, indefinites, interrogative pronouns, relative pronouns, etc. In sign languages, pronominal systems have been identified at least as far back as the mid-1970s (e.g., Friedman 1975 for American Sign Language). Since then, the term ‘pronoun’ has been widely used to refer to signs in various sign languages which have the function of personal pronouns ⫺ that is, deictic/pointing signs which refer to signer, addressee, and non-addressed participants. As with spoken languages, the term has also been extended to refer to proforms such as indefinites, interrogatives, and relative pronouns. This chapter describes personal pronouns and proforms in sign languages, their relationships (or possible relationships) to each other, and how these relationships compare to pronouns/proforms in spoken languages.
1. Pronouns in spoken languages and sign languages

The traditional definition of a pronoun is that it ‘stands for’ or ‘takes the place of’ a noun (or more specifically, a noun phrase) (Bhat 2004). However, the term ‘pronoun’
has been used traditionally to refer to various types of words in spoken languages, including not only personal pronouns but also words such as demonstratives, indefinites, interrogative pronouns, relative pronouns, etc. Some of these fit the traditional definition better than others. Interrogatives, demonstratives, indefinites, and relative pronouns, for instance, can stand for lexical categories other than nouns. Also, while these latter examples do have various deictic and/or anaphoric uses, they ‘stand for’ nouns/noun phrases much less clearly than personal pronouns do. For this reason, Bhat (2004) refers to non-personal pronouns such as demonstratives, indefinites, reflexives, and interrogatives collectively as ‘proforms’. Various types of personal pronouns and proforms are related to each other in different ways. Some types of proforms are phonologically identical to other types (e.g. relative pronouns and demonstrative pronouns in some languages; indefinite pronouns and interrogative pronouns in others), and the affinities vary across languages (Bhat 2004). Pronominal systems have been identified in sign languages such as American Sign Language (ASL) at least as far back as the mid-1970s (Friedman 1975). Since then, the term ‘pronoun’ has been widely used to refer to signs in various sign languages which have the function of personal pronouns ⫺ that is, deictic/pointing signs which refer to signer, addressee, and non-addressed participants. As with spoken languages, the term has also been extended to refer to other categories such as indefinites, interrogatives, and relative pronouns. Here, I follow the terminology used by Bhat (2004) in distinguishing personal pronouns referring to speech act participants from proforms (including indefinites, interrogatives, and relative pronouns), with the term ‘pronoun’ as a superordinate category subsuming both personal pronouns and proforms. Thus, in this chapter, the term proform is used to refer to pronouns other than personal pronouns, including reflexive pronouns, relative pronouns, reciprocal pronouns, indefinites, interrogatives, and demonstratives. As with spoken languages, affinities can be found between pronouns and proforms in sign languages as well. In particular, in many sign languages, the singular non-first person personal pronoun (a pointing sign) is phonologically identical to many proforms (e.g. demonstratives and relative pronouns). Additionally, it is possible for pointing signs to have other non-pronominal functions, such as determiners and adverbials (Edge/Herrmann 1977; Zimmer/Patschke 1990). Thus, one characteristic that pointing signs tend to share within and across sign languages is a general deictic, not just pronominal, function. This chapter begins with personal pronouns and then moves on to proforms such as indefinites, demonstratives, interrogative pronouns, and relative pronouns. Examples in this chapter (which include productions of fluent native and non-native British Sign Language (BSL) signers from elicited narrative descriptions of cartoons/animations) will focus largely on two sign languages for which pronouns have been fairly well described: BSL and ASL. Data from some other sign languages are included where information from the literature is available.
2. Personal pronouns

Personal pronouns in sign languages generally take the form of pointing signs, which are then directed towards present referents or locations in the signing space associated with absent referents, as shown in Figures 11.1 and 11.2, or towards the signer him/herself, as in Figure 11.3.
Fig. 11.1: index3a ‘she’
Fig. 11.2: index2 ‘you’
Fig. 11.3: index1 ‘me’
First person pronouns in sign languages are directed inwards, usually towards the signer’s chest. However, there are exceptions to this: e.g. first person pronouns in Japanese Sign Language (NS) and Plains Indian Sign Language can be directed towards the signer’s nose (Farnell 1995; McBurney 2002). In general, in most sign languages, the space around the signer is used for the establishment and maintenance of pronominal (as well as other types of) reference throughout a discourse. However, there is evidence that the use of the signing space for pronominal reference may not be universal amongst sign languages. Marsaja (2008) notes that Kata Kolok, a village sign language used in Bali, Indonesia, prefers pointing to fingers on the non-dominant hand ⫺ i.e. ‘list buoys’ (Liddell 2003) ⫺ rather than to locations in space for reference. Also, Cambodian Sign Language appears to prefer full noun phrases over pronouns, an influence from politeness strategies in Khmer (Schembri, personal communication). In addition to pronouns, other means of establishing and maintaining spatial loci in a discourse include agreement/indicating verbs (see chapter 7 on verb agreement) and, in some sign languages, agreement auxiliaries (see chapter 10 on agreement auxiliaries). Both of these devices have been considered to be grammaticised forms of pronominalisation or spatial loci (Pfau/Steinbach 2006). If the referent is present, the signer uses a pronoun or other agreement/indicating device to point to the location of the referent. If the referent is not present, the signer may establish a point in space for the referent, which could be motivated in some way (e.g. pointing towards a chair where a person usually sits) or could be arbitrary. Once a location in space for a referent has been established, that same location can be referred to again and again unambiguously with any of these devices, as in the example from BSL in (1) below, until it is actively changed. For more on the use of signing space in sign languages, see chapter 19.

(1) sister index3a upset. index1 1ask3a what. index3a lose bag. [BSL]
    sister there upset. I I-ask-her what. She lost bag.
    ‘My sister was upset. I asked her what was wrong. She had lost her bag.’
2.1. Person

The issue of person in sign languages is controversial. Traditionally, sign language researchers assumed the spatial modification of personal pronouns to be part of a three-person system analogous to those found in spoken languages (Friedman 1975; Klima/Bellugi 1979; Padden 1983).
According to these analyses, pronouns which point to the signer are first person forms, those which point to the addressee(s) are second person forms, and those which point to non-addressed participant(s) are third person forms. A three-person system for sign languages could be considered problematic, however, because there is no listable set of location values in the signing space to which a non-first person pronoun may point, for addressee or non-addressed participants. To address this issue, some researchers such as Lillo-Martin and Klima (1990) and McBurney (2002) proposed that sign languages like ASL have no person distinctions at all. Liddell (2003) has taken this idea a step further by claiming that sign language pronouns simply point to their referents gesturally. For Liddell, sign language pronouns are the result of a fusion of linguistic elements (phonologically specified parameters such as handshape and movement) and gestural elements (specifically the directionality of these signs). However, a gestural account of directionality alone does not explain first person behaviours, particularly with first person plurals, which do not necessarily point to their referents. This is part of the basis for Meier’s (1990) argument for a distinct first person category in ASL. Meier has argued for a two-person system for ASL ⫺ specifically, first person vs. non-first person. He claims that the use of space to refer to addressee and non-addressed participants is fully gradient rather than categorical, i.e. that the loci towards which these pronouns point are not listable morphemes, similarly to Lillo-Martin and Klima (1990), McBurney (2002), and Liddell (2003). But the situation with first person pronouns, Meier argues, is different. There is a single location associated with first person (in BSL and ASL, the centre of the signer’s chest). Furthermore, this location is not restricted to purely indexic reference, i.e. a point to the first person locus does not necessarily refer only to the signer. First person plurals in BSL and ASL, as shown in Figures 11.4 and 11.5, point primarily towards the chest area although they necessarily include referents other than just the signer. Furthermore, during constructed dialogue (a discourse strategy used for direct quotation ⫺ see Earis (2008) and chapter 17 on utterance reports and constructed action), a point toward the first person locus refers to the person whose role the signer is assuming, not the signer him/herself. Similarly, Nilsson (2004) found that in Swedish Sign Language, a point to the chest can be used to refer to the referent not only in representations of utterances but also of thoughts and actions. It is unclear whether or to what extent these patterns differ from gestural uses of pointing to the self in non-signers.
Fig. 11.4: BSL we
Fig. 11.5: ASL we
Meier’s (1990) analysis recognises the ‘listability problem’ (Rathmann/Mathur 2002; see also chapter 7 on verb agreement) of multiple second/third person location values while at the same time recognising the special status of first person, for which there is only one specified location within a given sign language (e.g. the signer’s chest). The first person locus is so stable that it can carry first person information virtually alone, i.e. even when the 1-handshape is lost through phonological processes. Studies on handshape variation in ASL (Lucas/Bayley 2005) and BSL (Schembri/Fenlon/Rentelis 2009) have found that the 1-handshape is used significantly less often (e.g. due to assimilation) with first person pronouns than with non-first person pronouns. Other evidence for a distinct grammatical category for first person comes from first person plural forms. Non-first person pronouns point to the location(s) of each of their referent(s), while first person plurals generally only point, if anywhere, to the location of the signer (Cormier 2005, 2007; Meier 1990). Two-person systems have been assumed by other researchers for ASL and other sign languages (e.g., Emmorey 2002; Engberg-Pedersen 1993; Farris 1998; Lillo-Martin 2002; Padden 1990; Rathmann/Mathur 2002; Todd 2009), including Liddell (2003), who presumably sees a two-person (first vs. non-first) system as compatible with the notion that non-first person pronouns point to their referents gesturally. However, not all researchers subscribe to a two-person system. Berenz (2002) and Alibasic Ciciliani and Wilbur (2006) support the notion of a three-person system for Brazilian Sign Language (LSB) and Croatian Sign Language (HZJ), respectively, as well as ASL. They argue that, while the spatial locations to which addressee-directed and non-addressee-directed pronouns are directed may be exactly the same, there are other cues that do reliably distinguish second from third person. These cues include the relationship between the direction of the signer’s eye gaze and the orientation of the head, chest, and hand. For second person reference, these four articulators typically align (assuming the signer and addressee are directly facing each other); for third person reference, the direction in which the hand points is misaligned with the other three articulators. Based on their analyses of LSB and HZJ, then, these authors argue for a three-person system for these sign languages, and for ASL, grounded in a systematic distinction between reference to second versus third persons. However, in an eye-tracking study, Thompson (2006) found no systematic difference in eye gaze between reference to addressees and reference to non-addressed participants in ASL. Even if eye gaze behaviours are more systematic in LSB and HZJ than in ASL, it is not clear what would make this distinction grammatical, as similar patterns of alignment and misalignment of eye gaze, torso orientation, and pointing are found in hearing non-signers when they gesture (Kita 2003). More research on pronominal systems of other sign languages and deictic gestures as used by non-signers, particularly reference in plural contexts, would help further clarify the role of person in sign languages (Johnston, in press).
2.2. Number

Number marking on pronouns is somewhat more straightforward than person marking. Sign languages generally distinguish singular, dual, and plural forms. Singular and dual pronouns index (point to) their referent(s) more or less directly: singular pronouns with a simple point to a location, and dual forms with a two-finger handshape (or some variant with the index and ring finger extended) which oscillates back and forth between the two locations being indexed (see Figure 11.6 for the first person dual pronoun two-of-us in BSL).
Fig. 11.6: BSL two-of-us
Fig. 11.7: BSL they
Fig. 11.8: BSL they-comp
Many sign languages additionally have so-called ‘number-incorporated pronouns’. BSL and ASL have pronouns which incorporate numerals and indicate three, four, and (for some signers in BSL) five referents (McBurney 2002; Sutton-Spence/Woll 1999). For ASL, some signers accept incorporation of numerals up to nine. This limit appears to be due to phonological constraints; most versions of the numbers 10 and above in ASL include a particular phonological movement which blocks number incorporation (McBurney 2002). Plural pronouns and number-incorporated pronouns index their referents more generally than singular or dual forms (Cormier 2007). Plural forms usually take the form of a 1-handshape with a sweeping movement across the locations associated with the referents (as shown for they in Figure 11.7) or with a distributed pointing motion towards multiple locations (see Figure 11.8 for they-comp, a non-first person composite plural form). These forms have been identified in various sign languages (McBurney 2002; Zeshan 2000). Number-incorporated pronouns typically have the handshape of the relevant numeral within that sign language and a small circular movement in the general location associated with the group of referents. Number-incorporated plurals have been identified in many sign languages, although some (such as Indo-Pakistani Sign Language, IPSL) appear not to have them (McBurney 2002). McBurney (2002) argues that ASL grammatically marks number for dual but not in the number-incorporated pronouns. She points out that number marking for dual is obligatory while the use of number incorporation appears to be an optional alternative to plural marking. For more on number and plural marking in sign languages, see chapter 6.
2.3. Exclusive pronouns

Further evidence for a distinction between singulars/duals, which index their referents directly, and plurals/number-incorporated forms, which index their referents less (or not at all), comes from exclusive pronouns in BSL and ASL (Cormier 2005, 2007). These studies aimed to investigate whether BSL and ASL have an inclusive/exclusive distinction in the first person plural, similar to the inclusive/exclusive distinction common in many spoken languages (particularly indigenous languages of the Americas, Australia, and Oceania, cf. Nichols 1992), whereby first person plurals can either include the addressee (‘inclusive’) or exclude the addressee (‘exclusive’). In languages which lack an inclusive/exclusive distinction, first person plurals are neutral with regard to whether or not the addressee is included (e.g. ‘we/us’ in English). Both BSL and ASL were found to have first person plurals (specifically plurals and number-incorporated pronouns) that are neutral with respect to clusivity, just as in English. These forms are produced at the centre of the signer’s chest, as shown above in Figures 11.4 and 11.5. However, these forms can be made exclusive by changing the location of the pronoun from the centre of the signer’s chest to the signer’s left or right side. These exclusive forms are different from exclusive pronouns in spoken languages because they may exclude any referent salient in the discourse, not only the addressee. Wilbur and Patschke (1998) and Alibasic Ciciliani and Wilbur (2006) discuss what they refer to as ‘inclusive’ and ‘exclusive’ pronouns in ASL and HZJ. However, based on the descriptions, these forms seem to actually be first person and non-first person plurals, respectively ⫺ i.e. inclusive/exclusive of the signer ⫺ rather than inclusive/exclusive of the addressee or another salient referent as in spoken languages and as identified in BSL and ASL (Cormier 2005, 2007).
2.4. Possessive pronouns

Possessive pronouns in sign languages described to date are directional in the same way that non-possessive personal pronouns are. They usually have a handshape distinct from the pointing 1-handshape used in other personal pronouns ⫺ e.g. a flat B-handshape with the palm directed toward the referent in sign languages such as ASL, HZJ, Austrian Sign Language (ÖGS), Finnish Sign Language (FinSL), Danish Sign Language (DSL), and Hong Kong Sign Language (HKSL) (Alibasic Ciciliani/Wilbur 2006; Pichler et al. 2008; Tang/Sze 2002), and a closed-fist handshape in the British, Australian, and New Zealand Sign Language family (BANZSL) (Cormier/Fenlon 2009; Sutton-Spence/Woll 1999). Although BSL does use the closed-fist handshape in most cases, the pointing 1-handshape may also be used for inalienable possession (Cormier/Fenlon 2009; Sutton-Spence/Woll 1999). In HKSL, the flat handshape for possession is restricted to predicative possession. Nominal possession (with or without an overt possessor) is expressed via a 1-handshape instead (Tang/Sze 2002). Possessive pronouns, in BSL and ASL at least, are marked for person and number in the same way that non-possessive personal pronouns are (Cormier/Fenlon 2009).
2.5. Gender and case

It is not common for sign language pronouns to be marked for gender, but examples have been described in the literature. Fischer (1996) and Smith (1990) note gender marking for pronouns and on classifier constructions in NS and Taiwan Sign Language (TSL). They claim that pronouns and some classifiers are marked for masculine and feminine via a change in handshape. However, there are some questions about the degree to which gender marking is obligatory (or even the degree to which it occurs with pronouns at all) within the pronominal systems of these languages; McBurney (2002) suggests that this marking may be a productive (optional) morphological process in the pronominal systems of these languages rather than obligatory grammatical gender marking. Case marking on nouns or pronouns in sign languages is also not very common. Grammatical relations between arguments tend to be marked by the verb or by word order, or are not marked at all and are only recoverable via pragmatic context. However, Meir (2003) describes the emergence of a case-marked pronoun in Israeli Sign Language (Israeli SL). This pronoun, she argues, has been grammaticised from the noun person and currently functions as an object-marked pronoun. It exists alongside the more typical pointing sign used as a pronoun unmarked for case, which is used in a variety of grammatical relations (subject, object, etc.), just as in other sign languages.
3. Proforms

Somewhat confusingly, the term ‘proform’ or ‘pro-form’ has been used to refer to a variety of different features and constructions in sign languages, including: the location to which a personal pronoun or other directional sign points (Edge/Herrmann 1977; Friedman 1975); the (personal) pronominal pointing sign itself (Hoffmeister 1978); a pointing sign distinct from a personal pronoun, usually made with the non-dominant hand, which is used to express spatial information (Engberg-Pedersen 1993); an alternative label for handshapes in classifier constructions (Engberg-Pedersen/Pedersen 1985); and finally a superordinate term covering both personal pronouns and classifier constructions which refer to or stand for something previously identified (Chang/Su/Tai 2005; Sutton-Spence/Woll 1999). As noted above, following Bhat (2004), the term proform is used here to refer to pronouns other than personal pronouns, including reflexive pronouns, relative pronouns, reciprocal pronouns, indefinites, interrogatives, and demonstratives.
3.1. Reflexive and emphatic pronouns

There is a class of sign language proforms that has been labelled as reflexive and is often glossed in its singular form as self. This pronoun can be marked for person (first and non-first) and number (singular and plural) in BSL and ASL and is directional in the same way that other personal pronouns are, as shown in Figures 11.9 and 11.10. These pronouns function primarily as emphatic pronouns in ASL (Lee et al. 1997; Liddell 2003), and seem to function the same way in BSL. Examples from BSL and ASL (Padden 1983, 134) are given in (2) and (3).
Fig. 11.9: BSL self3a
Fig. 11.10: ASL self3a
(2) gromit3a play poss3a toy drill. drill++. stuck. self3a spin-around [BSL]
    ‘Gromit was playing with a toy drill. He was drilling. The drill got stuck, and he himself spun around.’

(3) sister iself telephone c-o [ASL]
    ‘My sister will call the company herself.’
3.2. Indefinite pronouns

Indefinite pronouns in some spoken languages appear to have been grammaticalised from generic nouns such as ‘person’ or ‘thing’, and/or from the numeral ‘one’ (Haspelmath 1997). This pattern is also found in some sign languages. The indefinite animate pronoun someone in BSL has the same handshape and orientation as the BSL numeral one and the BSL classifier for a person or animate entity, with an additional slight tremoring movement, as in Figure 11.11 and in (4) below. (The sign someone is also identical in form with the interrogative pronoun who, as noted in section 3.4 below.) Inanimate indefinites in BSL may be the same as the sign some, as in Figure 11.12 and in (5), or the sign thing (Brien 1992).
Fig. 11.11: BSL someone / ASL something/one
Fig. 11.12: BSL something(=some)
(4) road bicycle someone cl:sit-on-bicycle nothing. [BSL]
    ‘On a road there is a bicycle with nobody sitting on it.’

(5) something(=some) road, something(=some) low [BSL]
    ‘There is something on the road, something low down close to the road.’
Neidle et al. (2000) describe the ASL indefinite pronoun something/one, which is the same as the indefinite animate pronoun in BSL, as in Figure 11.11 above and in (6). As in BSL, the ASL indefinite pronoun shares the same handshape and orientation as the ASL numeral one and the ASL classifier for a person or animate entity (Neidle et al. 2000, 91).

(6) something/one arrive [ASL]
    ‘Someone/something arrived.’
Pfau and Steinbach (2006) describe the indefinite pronoun in German Sign Language (DGS) and Sign Language of the Netherlands (NGT) as a grammaticised combination of the numeral one and the sign person, as in (7) and (8). Pfau and Steinbach point out that what distinguishes this indefinite form from the phrase one person ‘one person’ is that the indefinite does not necessarily refer to only one person. Therefore, it could be one or more people that were seen in (7), or one or more people who are expected to do the dishes in (8) (Pfau/Steinbach 2006, 31).
(7) index1 one^person see [DGS]
    ‘I’ve seen someone.’

(8) one^person wash-dish do must [NGT]
    ‘Someone has to wash the dishes.’
3.3. Reciprocal pronouns

Pronouns expressing reciprocal meaning in spoken languages have an interesting relationship with reflexives and indefinites. Bhat (2004) notes that reciprocal meanings (such as ‘each other’ in English) tend to be expressed in spoken languages by indefinite expressions or the numeral ‘one’ (which, used in a pronominal context, would also have indefinite characteristics). English, for example, does not derive reciprocals from personal pronouns but instead from indefinite expressions such as ‘each’, ‘other’, ‘one’, and ‘another’, as in (9) below. Such affinities between reciprocals and indefinites are common amongst spoken languages. Reflexives, on the other hand, are inherently anaphoric and definite and are therefore semantically quite different from reciprocals (Bhat 2004). Thus we might expect to see more affinities between reciprocals and indefinites than between reciprocals and reflexives.
(9) a. The children are helping each other.
    b. The girls looked at one another.
However, reciprocal pronouns in BSL and ASL seem to be more closely related to reflexives than to indefinites. The reciprocal and reflexive pronouns in BSL and ASL share more formational features than the reciprocal and indefinite pronouns do.
Fig. 11.13: BSL each-other
Fig. 11.14: ASL each-other
Thus for BSL, Figure 11.13 each-other is more similar to Figure 11.9 self than it is to Figures 11.11 someone or 11.12 something. For ASL, Figure 11.14 each-other is (much) more similar to Figure 11.10 self than to Figure 11.11 something/one. It is interesting that reciprocals seem to align themselves more with indefinites in spoken languages but with reflexives in BSL and ASL; however, the reason for this apparent difference is unclear. We do not know enough about reciprocal forms in other sign languages to know whether or to what extent this affinity between reciprocals and reflexives holds or varies across sign languages. Reciprocal pronouns are not the only way of expressing reciprocal relationships in sign languages. Agreement verbs in several sign languages allow reciprocal marking directly (Fischer/Gough 1980; Klima/Bellugi 1979; Pfau/Steinbach 2003). Pfau and Steinbach (2003) claim that DGS does not have reciprocal pronouns at all but expresses reciprocity in other ways, including via reciprocal marking on agreement verbs or on person agreement markers. It may be that sign languages that have person agreement markers (see chapter 10), such as DGS, have less need for a reciprocal pronoun than sign languages which do not have person agreement markers, such as ASL and BSL.
3.4. Interrogative pronouns

Most sign languages have some pronouns which have an interrogative function, e.g. signs meaning ‘what’ or ‘who’. However, the number of interrogative pronouns across sign languages and the extent to which they differ from non-interrogative signs within each language vary greatly. For example, sign languages such as ASL and BSL have at least one interrogative pronoun for each of the following concepts: ‘who’, ‘what’, ‘when’, ‘where’, ‘how’, and ‘why’. IPSL, on the other hand, has only one general interrogative sign (Zeshan 2004). The syntactic use of interrogatives and wh-questions in sign languages is covered in detail in chapter 14 on sentence types. One issue regarding interrogatives that is relevant for this chapter on pronouns is the relationship between interrogatives and indefinites. Zeshan (2004) notes that the same signs which are used for interrogatives in many sign languages have other non-interrogative functions as well, especially as indefinites. Specifically, NS, FinSL, LSB, and BANZSL all have interrogative signs which are also used for indefinites. For
instance, in BSL, the same sign shown above in Figure 11.11 is used to mean both ‘someone’ and ‘who’. This is consistent with Bhat’s (2004) observation for spoken languages that interrogatives and indefinites are strongly linked. If this affinity between interrogatives and indefinites holds for other sign languages, this would provide evidence that the link between interrogatives and indefinites is modality independent. More research is needed to determine whether this is the case.
3.5. Demonstrative pronouns

Demonstrative pronouns in spoken languages often distinguish between spatial locations, e.g. proximate/remote or proximate/medial/remote. English, for instance, makes only a two-way distinction (‘this’ vs. ‘that’). Sign language personal pronouns certainly can express spatial distinctions, both for animate referents (where the pointing sign would best be interpreted as ‘he’, ‘she’, ‘you’, ‘they’, etc.) and inanimate referents (where the pointing sign would best be interpreted as ‘it’, ‘this’, ‘that’, etc.). However, they do so gradiently and do not appear to have distinct categorical markings for notions such as proximate or remote. Many sign languages have been noted as having such an affinity between personal pronouns and demonstratives, including DGS (Pfau/Steinbach 2005) and Italian Sign Language (LIS) (Branchini 2006). Although it is very common for demonstrative pronouns in sign languages to be phonologically identical to personal pronouns, ASL at least has a distinct demonstrative pronoun that (Liddell 1980), as shown in Figure 11.15. (Liddell (1980) actually describes four variants of the sign shown in Figure 11.15 which differ slightly in form and function. The version in Figure 11.15 can be used either as a demonstrative or as a relative pronoun; see also section 3.6 below.)
Fig. 11.15: ASL that
3.6. Relative pronouns

Relative clauses have been identified in many sign languages, including ASL (Coulter 1983; Liddell 1980), LIS (Branchini 2006; Cecchetto/Geraci/Zucchi 2006), and DGS (Pfau/Steinbach 2005) ⫺ see also chapter 16 for a detailed discussion of relative clauses. Relative clauses are relevant to this chapter in that they often include relative pronouns.
ASL uses a sign glossed as that as a relative pronoun (Coulter 1983; Fischer 1990; Liddell 1980; Petronio 1993), as in (10), cf. Liddell (1980, 148). Pfau and Steinbach (2005) note that DGS has two different relative pronouns, one for human referents as in (11) and Figure 11.16a and one for non-human referents as in (12) and Figure 11.16b, cf. Pfau and Steinbach (2005, 512). A sign similar to the DGS non-human relative pronoun has been noted for LIS (Branchini 2006; Cecchetto/Geraci/Zucchi 2006). Other sign languages such as LSB and BSL do not appear to have manual relative pronouns or complementisers at all but instead use word order and prosodic cues such as non-manual features (Nunes/de Quadros 2004, cited in Pfau/Steinbach 2005).
(10)     rc
     [[recently dog thata chase cat]S1]NP come home [ASL]
     ‘The dog which recently chased the cat came home.’

(11)     re
     [man (ix3) [rpro-h3 cat stroke]CP]DP [DGS]
     ‘the man who is stroking the cat’

(12)     re
     [book [rpro-nh3 poss1 father read]CP]DP
     ‘the book which my father is reading’
Fig. 11.16a: DGS RPRO-H
Fig. 11.16b: DGS RPRO-NH
Bhat (2004) notes a common affinity between relative pronouns and demonstratives in many spoken languages, including English. This appears to hold for some sign languages as well. ASL that (as shown above in Figure 11.15) is used both as a demonstrative and as a relative pronoun (Liddell 1980). Pfau and Steinbach (2005) note that the DGS relative pronoun used for non-human referents (shown in Figure 11.16b) is identical in form to the DGS personal and demonstrative pronoun, which is also identical to the BSL personal pronoun as shown in Figure 11.1. The LIS relative pronoun is not identical to the LIS personal/demonstrative pronoun, although it does share the same F-handshape (Branchini 2006; Cecchetto/Geraci/Zucchi 2006).
4. Conclusion

Like spoken languages, sign languages have many different types of pronoun, including personal pronouns as well as indefinites, reciprocals, interrogatives, demonstratives,
and relative pronouns. Affinities between different types of pronouns (including both personal pronouns and proforms) seem to be similar to those found within and across spoken languages. A major modality effect when it comes to personal pronouns is due to the use of the signing space for reference, leading to controversies surrounding person systems and person agreement in sign languages.

Acknowledgements: Thanks to Clifton Langdon-Grigg, Jordan Fenlon, Sandra Smith, Pascale Maroney, and Claire Moore-Kibbey for acting as models for the example signs in this chapter. Thanks to Inge Zwitserlood, Adam Schembri, Jordan Fenlon, and Helen Earis for comments on earlier drafts of this chapter. Thanks also to Gabriel Arellano for advice on some ASL examples. This work was supported by the Economic and Social Research Council of Great Britain (Grant RES-620-28-6001), Deafness, Cognition and Language Research Centre (DCAL).
5. Literature

Alibasic Ciciliani, Tamara/Wilbur, Ronnie B. 2006 Pronominal System in Croatian Sign Language. In: Sign Language & Linguistics 9(1/2), 95⫺132.
Berenz, Norine 2002 Insights into Person Deixis. In: Sign Language & Linguistics 5(2), 203⫺227.
Bhat, D. N. S. 2004 Pronouns. Oxford: Oxford University Press.
Branchini, Chiara 2006 On Relativization and Clefting in Italian Sign Language (LIS). PhD Dissertation, University of Urbino.
Brien, David (ed.) 1992 Dictionary of British Sign Language/English. Boston: Faber & Faber.
Cecchetto, Carlo/Geraci, Carlo/Zucchi, Sandro 2006 Strategies of Relativization in Italian Sign Language. In: Natural Language and Linguistic Theory 24, 945⫺975.
Chang, Jung-hsing/Su, Shiou-fen/Tai, James H-Y. 2005 Classifier Predicates Reanalyzed, with Special Reference to Taiwan Sign Language. In: Language & Linguistics 6(2), 247⫺278.
Cormier, Kearsy 2005 Exclusive Pronouns in American Sign Language. In: Filimonova, Elena (ed.), Clusivity: Typology and Case Studies of Inclusive-Exclusive Distinction. Amsterdam: Benjamins, 241⫺268.
Cormier, Kearsy 2007 Do All Pronouns Point? Indexicality of First Person Plural Pronouns in BSL and ASL. In: Perniss, Pamela M./Pfau, Roland/Steinbach, Markus (eds.), Visible Variation: Comparative Studies on Sign Language Structure. Berlin: Mouton de Gruyter, 63⫺101.
Cormier, Kearsy/Fenlon, Jordan 2009 Possession in the Visual-Gestural Modality: How Possession Is Expressed in British Sign Language. In: McGregor, William (ed.), The Expression of Possession. Berlin: Mouton de Gruyter, 389⫺422.
Coulter, Geoffrey R. 1983 A Conjoined Analysis of American Sign Language Relative Clauses. In: Discourse Processes 6, 305⫺318.
Earis, Helen 2008 Point of View in Narrative Discourse: A Comparison of British Sign Language and Spoken English. PhD Dissertation, University College London.
Edge, VickiLee/Herrmann, Leora 1977 Verbs and the Determination of Subject in American Sign Language. In: Friedman, Lynn (ed.), On the Other Hand: New Perspectives on American Sign Language. New York: Academic Press, 137⫺179.
Emmorey, Karen 2002 Language, Cognition, and the Brain: Insights from Sign Language Research. Mahwah, NJ: Lawrence Erlbaum Associates.
Engberg-Pedersen, Elisabeth 1993 Space in Danish Sign Language. Hamburg: Signum.
Engberg-Pedersen, Elisabeth/Pedersen, Annegrethe 1985 Proforms in Danish Sign Language, Their Use in Figurative Signing. In: Stokoe, William/Volterra, Virginia (eds.), Proceedings of the Third International Symposium on Sign Language Research. Silver Spring, MD: Linstok Press, 202⫺209.
Farnell, Brenda 1995 Do You See What I Mean? Plains Indian Sign Talk and the Embodiment of Action. Austin: University of Texas Press.
Farris, Michael A. 1998 Models of Person in Sign Languages. In: Lingua Posnaniensis 40, 47⫺59.
Fischer, Susan D. 1990 The Head Parameter in ASL. In: Edmondson, William H./Karlsson, Fred (eds.), SLR ’87: Papers from the Fourth International Symposium on Sign Language Research. Hamburg: Signum, 75⫺85.
Fischer, Susan D. 1996 The Role of Agreement and Auxiliaries in Sign Language. In: Lingua 98, 103⫺119.
Fischer, Susan D./Gough, Bonnie 1980 Verbs in American Sign Language. In: Stokoe, William (ed.), Sign and Culture: A Reader for Students of American Sign Language. Silver Spring, MD: Linstok Press, 149⫺179.
Friedman, Lynn 1975 Space and Time Reference in American Sign Language. In: Language 51(4), 940⫺961.
Haspelmath, Martin 1997 Indefinite Pronouns. Oxford: Oxford University Press.
Hoffmeister, Robert 1978 The Development of Demonstrative Pronouns, Locatives and Personal Pronouns in the Acquisition of ASL by Deaf Children of Deaf Parents. PhD Dissertation, University of Minnesota.
Johnston, Trevor in press Functional and Formational Characteristics of Pointing Signs in a Corpus of Auslan (Australian Sign Language). To appear in: Corpus Linguistics and Linguistic Theory.
Kita, Sotaro 2003 Interplay of Gaze, Hand, Torso Orientation, and Language in Pointing. In: Kita, Sotaro (ed.), Pointing: Where Language, Culture and Cognition Meet. Mahwah, NJ: Lawrence Erlbaum Associates, 307⫺328.
Klima, Edward/Bellugi, Ursula 1979 The Signs of Language. Cambridge, MA: Harvard University Press.
Lee, Robert G./Neidle, Carol/MacLaughlin, Dawn/Bahan, Ben/Kegl, Judy 1997 Role Shift in ASL: A Syntactic Look at Direct Speech. In: Neidle, Carol/MacLaughlin, Dawn/Lee, Robert G. (eds.), Syntactic Structure and Discourse Function: An Examination of Two Constructions in American Sign Language. Boston, MA: American Sign Language Linguistic Research Project, Boston University, 24⫺45.
Liddell, Scott K. 1980 American Sign Language Syntax. The Hague: Mouton de Gruyter.
Liddell, Scott K. 2003 Grammar, Gesture and Meaning in American Sign Language. Cambridge: Cambridge University Press.
Lillo-Martin, Diane 2002 Where Are All the Modality Effects? In: Meier, Richard P./Cormier, Kearsy/Quinto-Pozos, David (eds.), Modality and Structure in Signed and Spoken Languages. Cambridge: Cambridge University Press, 241⫺262.
Lillo-Martin, Diane/Klima, Edward 1990 Pointing out Differences: ASL Pronouns in Syntactic Theory. In: Fischer, Susan D./Siple, Patricia (eds.), Theoretical Issues in Sign Language Research, Vol. 1: Linguistics. Chicago: University of Chicago Press, 191⫺210.
Lucas, Ceil/Bayley, Robert 2005 Variation in ASL: The Role of Grammatical Function. In: Sign Language Studies 6(1), 38⫺75.
Marsaja, I. Gede 2008 Desa Kolok ⫺ A Deaf Village and Its Sign Language in Bali, Indonesia. Nijmegen: Ishara Press.
McBurney, Susan L. 2002 Pronominal Reference in Signed and Spoken Language: Are Grammatical Categories Modality-Dependent? In: Meier, Richard P./Cormier, Kearsy/Quinto-Pozos, David (eds.), Modality and Structure in Signed and Spoken Languages. Cambridge: Cambridge University Press, 329⫺369.
Meier, Richard P. 1990 Person Deixis in ASL. In: Fischer, Susan D./Siple, Patricia (eds.), Theoretical Issues in Sign Language Research, Vol. 1: Linguistics. Chicago: University of Chicago Press, 175⫺190.
Meir, Irit 2003 Grammaticalization and Modality: The Emergence of a Case-Marked Pronoun in Israeli Sign Language. In: Journal of Linguistics 39, 109⫺140.
Neidle, Carol/Kegl, Judy/MacLaughlin, Dawn/Bahan, Ben/Lee, Robert 2000 The Syntax of American Sign Language. Cambridge, MA: MIT Press.
Nichols, Johanna 1992 Linguistic Diversity in Space and Time. Chicago: University of Chicago Press.
Nilsson, Anna-Lena 2004 Form and Discourse Function of the Pointing toward the Chest in Swedish Sign Language. In: Sign Language & Linguistics 7(1), 3⫺30.
Nunes, Jairo/de Quadros, Ronice M. 2004 Phonetic Realization of Multiple Copies in Brazilian Sign Language. Paper Presented at the 8th Conference on Theoretical Issues in Sign Language Research (TISLR), Barcelona.
Padden, Carol A. 1983 Interaction of Morphology and Syntax in American Sign Language. PhD Dissertation, University of California at San Diego.
Padden, Carol A. 1990 The Relation between Space and Grammar in ASL Verb Morphology. In: Lucas, Ceil (ed.), Sign Language Research: Theoretical Issues. Washington, D.C.: Gallaudet University Press, 118⫺132.
Petronio, Karen 1993 Clause Structure in American Sign Language. PhD Dissertation, University of Washington.
Pfau, Roland/Steinbach, Markus 2003 Optimal Reciprocals in German Sign Language. In: Sign Language & Linguistics 6(1), 3⫺42.
Pfau, Roland/Steinbach, Markus 2005 Relative Clauses in German Sign Language: Extraposition and Reconstruction. In: Bateman, Leah/Ussery, Cherlon (eds.), Proceedings of the North East Linguistic Society (NELS 35), Vol. 2. Amherst, MA: GLSA, 507⫺521.
Pfau, Roland/Steinbach, Markus 2006 Modality-Independent and Modality-Specific Aspects of Grammaticalization in Sign Languages. In: Linguistics in Potsdam 24, 5⫺98.
Pichler, Deborah Chen/Schalber, Katharina/Hochgesang, Julie/Milković, Marina/Wilbur, Ronnie B./Vulje, Martina/Pribanić, Ljubica 2008 Possession and Existence in Three Sign Languages. In: Quadros, Ronice M. de (ed.), Sign Languages: Spinning and Unraveling the Past, Present and Future. TISLR 9, Forty-Five Papers and Three Posters from the 9th Theoretical Issues in Sign Language Research Conference. Petrópolis/RJ, Brazil: Editora Arara Azul, 440⫺458.
Rathmann, Christian/Mathur, Gaurav 2002 Is Verb Agreement the Same Cross-Modally? In: Meier, Richard P./Cormier, Kearsy/Quinto-Pozos, David (eds.), Modality and Structure in Signed and Spoken Languages. Cambridge: Cambridge University Press, 370⫺404.
Schembri, Adam/Fenlon, Jordan/Rentelis, Ramas 2009 British Sign Language Corpus Project: Sociolinguistic Variation in the 1 Handshape in BSL Conversations. Paper Presented at the 50th Annual Meeting of the Linguistics Association of Great Britain, Edinburgh.
Smith, Wayne H. 1990 Evidence for Auxiliaries in Taiwan Sign Language. In: Fischer, Susan D./Siple, Patricia (eds.), Theoretical Issues in Sign Language Research, Vol. 1: Linguistics. Chicago: University of Chicago Press, 211⫺228.
Sutton-Spence, Rachel/Woll, Bencie 1999 The Linguistics of British Sign Language. Cambridge: Cambridge University Press.
Tang, Gladys/Sze, Felix 2002 Nominal Expressions in Hong Kong Sign Language: Does Modality Make a Difference? In: Meier, Richard P./Cormier, Kearsy/Quinto-Pozos, David (eds.), Modality and Structure in Signed and Spoken Languages. Cambridge: Cambridge University Press, 296⫺320.
Thompson, Robin 2006 Eye Gaze in American Sign Language: Linguistic Functions for Verbs and Pronouns. PhD Dissertation, University of California, San Diego.
Todd, Peyton 2009 Does ASL Really Have Just Two Grammatical Persons? In: Sign Language Studies 9(2), 166⫺210.
Wilbur, Ronnie B./Patschke, Cynthia 1998 Body Leans and the Marking of Contrast in American Sign Language. In: Journal of Pragmatics 30, 275⫺303.
Zeshan, Ulrike 2000 Sign Language in Indo-Pakistan: A Description of a Signed Language. Amsterdam: Benjamins.
Zeshan, Ulrike 2004 Interrogative Constructions in Signed Languages: Crosslinguistic Perspectives. In: Language 80(1), 7⫺39.
Zimmer, June/Patschke, Cynthia 1990 A Class of Determiners. In: Lucas, Ceil/Valli, Clayton (eds.), Sign Language Research: Theoretical Issues. Washington, D.C.: Gallaudet University Press, 201⫺210.
Kearsy Cormier, London (United Kingdom)
III. Syntax

12. Word order

1. Word order ⫺ some background issues
2. Word order and sign languages
3. A timeline for sign linguistic research: how does word order work fit in?
4. Towards a typology of sign languages
5. Methodological issues: how data type impacts results
6. Conclusion
7. Literature
Abstract

This chapter explores issues relating to word order and sign languages. We begin by sketching an outline of the key issues involved in tackling word order matters, regardless of language modality. These include the functional aspect of word order, the articulatory issues associated with simultaneity in sign languages, and the question of whether one can identify a basic word order. Though the term ‘constituent order’ is more accurate, we will for convenience continue to use the term ‘word order’ given its historical importance in the literature. We go on to discuss the relationship between signs and words before providing a historically based survey of research on word order in sign languages. We follow Woll’s (2003) identification of three important phases of research: the first concentrating on similarities between sign and spoken languages; the second focussing on the modality of sign languages; and the third switching the emphasis to typological studies. We touch on the importance of such issues as non-manual features, simultaneity, and pragmatic processes like topicalisation. The theoretical stances of scholars cited include functional grammar, cognitive grammar, and generative grammar.
1. Word order ⫺ some background issues

In our discussion of word order, we follow Bouchard and Dubuisson (1995), who identify three aspects important to word order:
(i) a functional aspect, where the order of items provides information about how words combine, which, in turn, guides the interpretation of the sentence (section 1.1);
(ii) an articulatory aspect, which (for spoken languages) arises because it is generally impossible to articulate more than one sound at a time (section 1.2);
(iii) the presumption of the existence of a basic word order (section 1.3).
1.1. The functional aspect

Based on the identification of discrete constituents, which we discuss in the next section, cross-linguistic and typological research has identified a range of associations between specific orders in languages and particular functions. For example, ordering relations have been identified between a verb and its arguments, whether expressed as affixes or separate phrases, which identify the propositional structure of the clause. We may refer to a language that exhibits this behaviour as argument configurational. This may be achieved indirectly through a system of grammatical relations (subject, object, etc.) or directly via semantic roles (agent, patient, etc.). Greenberg’s (1966) well-known work on word order typology, which characterises languages as SVO, SOV, etc., assumes the ubiquity of this role of order in the determination of propositional meaning. However, scholars working on other spoken languages like Chinese (LaPolla 1995) or Hungarian (Kiss 2002) have argued that the primary role of order in these languages is to mark information structure distinctions such as focus and topic. Such languages have been termed discourse configurational (Kiss 1995). There have also been claims that some spoken languages have free word order, for example Warlpiri (Hale 1983) and Jingulu (Pensalfini 2003). These languages, which have been termed non-configurational, are said not to employ order for any discernible linguistic function. In the Generative Grammar literature, surface non-configurationality is often countered by positing more abstract hierarchical structure (see Sandler/Lillo-Martin (2006, 301⫺308) for this strategy applied to sign languages). These distinct ordering patterns of argument configurational, discourse configurational, and non-configurational have also been identified for sign languages. Following Liddell’s (1980) early descriptions of word order in American Sign Language (ASL), Valli, Lucas, and Mulrooney (2006) identify ASL as argument configurational, reflecting grammatical relations such as subject and object (also see Wilbur 1987; Neidle et al. 2000). Similar claims have been made for Italian Sign Language (LIS, Volterra et al. 1984), German Sign Language (DGS, Glück/Pfau 1998), and Brazilian Sign Language (LSB, de Quadros 1999). On the other hand, various scholars have argued for discourse configurational accounts, for example, Deuchar (1983) writing on British Sign Language (BSL), Engberg-Pedersen (1994) on Danish Sign Language (DSL), and Nadeau and Desouvrey (1994) on Quebec Sign Language (LSQ).
1.2. The articulatory aspect

The articulatory aspect raises issues about chronological sequence and discreteness and links directly to the issue of modality. The fact that sign languages can express different aspects of information at the same time differentiates them from spoken languages (even when taking into account prosodic elements such as tone) in terms of the degree of simultaneity. Simultaneity can be encoded both non-manually and manually. As for the former type of simultaneity, there is a striking amount of similarity across described sign languages regarding non-manuals marking interrogatives (including wh-questions and yes/no-questions), negation, topic-comment structure, conditionals, etc. (see, for example, Vermeerbergen/Leeson/Crasborn 2007). Vermeerbergen and Leeson (2011)
note that the similarities documented to date go beyond functionality: form is also highly similar. Across unrelated sign languages, wh-questions, for example, are marked by a clustering of non-manual features (NMFs) of which furrowed brows are the most salient, while for yes/no-questions, raised brows are the most salient feature (see chapter 14, Sentence Types, for discussion). The fact that sign languages, due to the availability of two articulators (the two hands), also allow for manual simultaneity compounds the issue; this is a point we return to in section 2. Given all this, it is probably not surprising that the issue of word order, when understood as a chronologically linear concept, is controversial for studies of sign languages. Indeed, it is from consideration of wh-questions that Bouchard and Dubuisson (1995) make their argument against the existence of a basic word order for sign languages (also see Bouchard (1997)). On this point, Perniss, Pfau, and Steinbach (2007) note that there seems to be a greater degree of variance across sign language interrogatives with respect to manual marking (e.g. question word paradigms, question particles, word order) than for non-manual marking, although differences in terms of the form and scope of non-manual marking are also attested.
1.3. Basic word order

In many discussions of the mappings between orders and functions, there is a presumption that one order is more basic than others. In this view, the basic word order is then changed to communicate other functions. Such changed orders may be seen as ‘marked’ or ‘atypical’ in some way. More elaborate versions of this approach might identify a range of order-function pairings, within each of which there may occur marked or atypical orders. Generally, the criteria for identifying basic word order include the following (Brennan 1994, 19, also see Dryer 2007):
(i) the order that is most frequent;
(ii) the word order of simple, declarative, active clauses with no complex words or noun phrases;
(iii) the word order that requires the simplest syntactic description;
(iv) the order that is accompanied by the least morphological marking;
(v) the order that is most neutral, i.e. that is the least pragmatically marked.
Based on these criteria, some scholars have argued for the existence of a basic word order in certain sign languages. Basic SVO order has been identified in, for instance, ASL and LSB, while LIS and DGS have been argued to have basic SOV order. These two word order patterns are illustrated by the ASL and LIS examples in (1), taken from Liddell (1980, 19) and Cecchetto/Geraci/Zucchi (2009, 282), respectively.
(1) a. woman forget purse [ASL]
       ‘The woman forgot the purse.’
    b. gianni maria love [LIS]
       ‘Gianni loves Maria.’
However, as noted above, some scholars question the universality of basic word order. Bouchard and Dubuisson (1995), for example, argue that “only languages in which
word order has an important functional role will exhibit a basic order” (1995, 100). Their argument is that the modality of sign languages reduces the importance of order as “there are other means that a language can use to indicate what elements combine” (1995, 132). The notion of basic word order usually underlies the identification of functional type in that the type is usually based on a postulated basic word order, which then may undergo changes for pragmatic reasons or to serve other functions. Massone and Curiel (2004), for instance, identify Argentine Sign Language (LSA) as argument configurational (SOV) in its basic word order but describe pragmatic rules such as topicalisation that may alter this basic order (see section 3.2 for further discussion).
2. Word order and sign languages

We might start by asking whether it is appropriate to term the ordering of constituents in a sign language as ‘word order’. Brennan (1994) concludes that

while there are difficulties with terms that originate in the examination of spoken languages, the unit that is known as ‘the sign’ in sign languages clearly functions as the linguistic unit that we know as the word. We do not usually exploit a separate term for this unit in relation to written as opposed to spoken language, even though notions of written word and spoken word are not totally congruous. (Brennan 1994, 13)
Brennan thus uses the term ‘word’ in a general sense to incorporate spoken, sign, and written language. She uses the term ‘sign’ when referring only to sign languages, taking as given that ‘signs’ are equivalent to ‘words’ in terms of grammatical role. However, in the same volume, Coerts (1994a,b), investigating word order in Sign Language of the Netherlands (NGT), refers explicitly to constituent structure. She does not explicitly motivate her choice of terminology (a problem that impacts on attempts at later typological work; see, for example, Johnston et al. (2007)), but as she is concerned with the ordering of elements within a fixed set of parameters, the discussion of constituents seems more appropriate. Leaving this debate regarding terminology aside, we can say that the issue of identifying a basic constituent order(s) in a sign language is complex. However, given the fact that sign languages are expressed in another modality, one which makes use of three-dimensional space and can employ simultaneous production of signs using the major articulators (i.e. the arms and hands), we also encounter questions that are unique to research on sign languages. These include questions regarding the degree and extent of simultaneous patterning, the extent of iconicity at syntactic and lexical levels, and the applicability to sign languages of a dichotomy between languages whose constituent orders reflect syntactic functions and those whose orders reflect pragmatic functions (after Brennan 1994, 29 f.). The challenge posed by simultaneity is illustrated by the examples in (2). In the NGT example in (2a), we observe full simultaneity of the verb and the direct object (which is expressed by a classifier); it is therefore impossible to decide whether we are dealing with SVO or SOV order (Coerts 1994b, 78). The Jordanian Sign Language (LIU) example in (2b) is even more complex (Hendriks 2008, 142 f.; note that the
signer is left-handed). The Figure (the subject ‘car’) and the Ground (the locative object ‘bridge’) are introduced simultaneously by classifiers. Subsequently, the classifier representing the car is first held in place, then moved with respect to the Ground, and then held in place again, taking on different grammatical roles in subsequent clauses. Clearly, it would be challenging, if not impossible, to determine word order in this example (see Miller (1994) and papers in Vermeerbergen/Leeson/Crasborn (2007) for discussion of different types of simultaneity; also see example (7) below).
(2) a. R: woman cut3b [NGT]
       L:       cl(thread)3b
       ‘The woman cuts the thread.’
    b. R: cl:vehicleforward hold backward-forward hold____________ [LIU]
       L: cl:bridge know cl:bridge stay what
       R: hold
       L: cl:vehiclemove forward repeatedly indexCL:vehicle
       ‘The car passed under the bridge, you get it? It passed under the bridge and stayed there. What (could he do)? That parked car was passed by other cars.’
In relation to the ordering of constituents within sign languages, Brennan notes that

there is a reasonable body of evidence to indicate that sequential ordering of signs does express such relationships, at least some of the time, in all of the signed languages so far studied. However, we also know from the studies available that there are other possible ways. (Brennan 1994, 31)
Among these “other possible ways”, we can list the addition of specific morphemes to the form of the verb, which allows for the expression of the verb plus its arguments (see section 4.3). Brennan makes the point that we cannot talk about SVO or SOV or VSO ordering if the verb and its arguments are expressed simultaneously within the production of a single sign, as is, for example, the case in classifier constructions (see chapter 8 for discussion). This and other issues related to the expression of simultaneity are taken up in Vermeerbergen, Leeson, and Crasborn (2007).
3. A timeline for sign linguistic research: how does word order work fit in?

Woll (2003) has described research on sign languages as falling into three broad categories: (i) the modern period, (ii) the post-modern period, and (iii) typological research (see chapter 38 for the history of sign linguistic research). We suggest that work on word order can be mapped onto this categorization, bearing in mind that the categories suggested by Woll are not absolute. For example, while ASL research may have entered into the post-modern stage in the early 1980s, the fact that for many other underdescribed sign languages the point of reference for comparative purposes has frequently been ASL or BSL implies that for these languages, some degree of cross-linguistic work has always been embedded in their approach to description. However,
the conscious move towards typological research, taking on board findings from the field of gesture research and awareness of the scope of simultaneity in word order, is very much a hallmark of early twenty-first century research. We address work that can be associated with the modern and post-modern periods in the following two subsections and turn to typological research in section 4.
3.1. The modern period and word order research

Early research tended to focus on the description of individual sign languages with reference to the literature on word order in spoken languages, and concentrated mostly on what made sign languages similar to spoken languages rather than on what differentiated them from each other. This links to Woll’s ‘modern period’. For example, early research focused on the linearity of expression in sign languages without reference to modality-related features of sign languages like manual simultaneity, iconicity, etc. (e.g., Fischer 1975; Liddell 1977). Fischer (1975), for instance, describes ASL as having an underlying SVO pattern at the clause level, but also notes that alternative orders exist (such as the use of topic constructions in certain instances, yielding OSV order). In contrast, Friedman (1976) claims that word order in ASL is relatively free, with a general tendency for verbs to appear sentence-finally, also arguing that the subject is not present in the majority of her examples. However, Liddell (1977) and Wilbur (1987) questioned Friedman’s analysis, criticising her failure to recognise the ways in which ASL verbs inflect to mark agreement, a point of differentiation between spoken and sign languages, which we can align with Woll’s post-modern period of research.
3.2. The post-modern period and word order research

In the post-modern period, which Woll pinpoints as having its beginnings in the 1980s, researchers began to look at the points of differentiation between sign and spoken languages, leading to work focussing on the impact of language modality on syntax. The beginnings of work on word order in BSL, for example, fall into this timeframe. As with the work on ASL reviewed above, Deuchar (1983) raises the question of whether BSL is an SVO language but argues that a more functional topic-comment analysis might more fully account for the data than one that limits itself to sign order per se. Deuchar drew on Li and Thompson’s (1976) work on the definition of topics, demonstrating links to the functionalist view on language. Her work also seeks to compare BSL with ASL, thus representing an early nod towards typological work for sign languages. For example, in exploring the function of topics in BSL, Deuchar did not find the slight backward head tilt being used as a marker of topicalisation in BSL which had been described by Liddell (1980) for ASL. However, she found that NMFs marked the separation of topic and comment in her data: topics were marked by raised eyebrows while the comments were marked by a headnod (a description which also differs slightly from that given by Baker-Shenk and Cokely (1980) for ASL). By the late 1980s and into the 1990s, work on ASL also began to make greater reference to topic marking and other points of differentiation such as simultaneity (e.g., Miller 1994). We note here that Miller also looked at LSQ, thus marking a move towards cross-linguistic, typological studies.
12. Word order
251
3.2.1. Functional and cognitive approaches to word order Work that addresses the word order issue from a functionalist-cognitive viewpoint argues that topic-comment structure reflects basic ordering in ASL (and probably other sign languages) and is pervasive across ASL discourse (e.g., Janzen 1998, 1999), noting, however, that this sense of pervasiveness is lost when topic-comment structure is considered as just one of several sentence types that arises. Janzen presents evidence from a range of historical and contemporary ASL monologues that suggests that topics grammaticalized from yes/no-question structure and argues that topics function as ‘pivots’ in the organisation of discourse. He suggests that topics in ASL arise in pragmatic, syntactic, and textual domains, but that in all cases, their prototypical characteristic is one of being ‘backward looking’ to a previous identifiable experience or portion of the text, or being ‘forward looking’, serving as the ground for a portion of discourse that follows. The examples in (3) illustrate different pragmatic discourse motivations for topicalisation (Janzen 1999, 276 f., glosses slightly adapted). In example (3a), the string I’ll see Bill is new information while the topicalised temporal phrase next-week situates the event within a temporal framework. In (3b), the object functions as topic because “the signer does not feel he can proceed with the proposition until it is clear that certain information has become activated for the addressee”.
(3) a.  top
        next week, future see b-i-l-l [ASL]
        ‘I’ll see bill next week.’
    b.  top
        know b-i-l-l, future see next-week
        ‘I’ll see bill next week.’
Another, yet related, theoretical strand influencing work on aspects of constituent ordering is that of cognitive linguistics, which emphasises the relationship between cognitive processes and language use (Langacker 1991). Work in this genre has pushed forward new views on aspects of verbal valence such as detransitivisation and passive constructions (Janzen/O’Dea/Shaffer 2001; Leeson 2001). Cognitive linguistics accounts typically frame the discussion of word order in terms of issues of categorisation, prototype theory, the influence of gesture and iconicity with respect to the relationship between form and meaning, and particularly the idea of iconicity at the level of grammar. The identification of underlying principles of cognition evidenced by sign language structures is an important goal. Work in this domain is founded on that of authors such as Jackendoff (1990) and Fauconnier (1985, 1997) and has led to a growing body of work by sign linguists working on a range of sign languages; for work on ASL, see, for instance, Liddell (2003), Armstrong and Wilcox (2007), Dudis (2004), S. Wilcox (2004), P. Wilcox (2000), Janzen (1999, 2005), Shaffer (2004), Taub (2001), and Taub and Galvan (2001); for Swedish Sign Language (SSL), see Bergman and Wallin (1985) and Nilsson (2010); for DGS, see Perniss (2007); for Icelandic Sign Language, see Thorvaldsdottir (2007); for Irish Sign Language (Irish SL), see Leeson and Saeed (2007, 2012) and Leeson (2001); for French Sign Language (LSF), see Cuxac (2000), Sallandre (2007), and Risler (2007); and for Israeli Sign Language (Israeli SL), see Meir (1998).
3.2.2. Generative approaches to word order

In contrast, other accounts lean towards generative views on language. Fischer (1990), for instance, observes that both head-first and head-final structures appear in ASL and notes a clear relationship between definiteness and topicalisation. She also notes inconsistency in terms of head-ordering within all types of phrases and attempts to account for this pattern in terms of topicalisation: heads usually precede their complements except where complements are definite ⫺ in such cases, a complement can precede the head. This leads Fischer to claim that ASL is like Japanese in structure insofar as ASL allows for multiple topics to occur. Similarly, Neidle et al. (2000) explore a wide range of clauses and noun phrases as used by ASL native signers within a generativist framework. They conclude (like Fischer and Liddell before them) that ASL has a basic hierarchical word order, which is SVO, basing their claims on the analysis of both naturalistic and elicited data. Working within a Minimalist Program perspective (Chomsky 1995), they state that derivations from this basic order can be explained in terms of movement operations, that is, they reflect derived orders. Neidle et al. make some very interesting descriptive claims for ASL: they argue that topics, tags, and pronominal right dislocations are not fundamental to the clause in ASL. They treat these constituents as being external to the clause (i.e. the Complementizer Phrase (CP)) and argue that once such clause-external elements are identified, it becomes evident that the basic word order in ASL is SVO. For example, in (4a), the object has been moved from its post-verbal base position (indicated by ‘t’ for ‘trace’) to a sentence-initial topic position ⫺ the specifier of a Topic Phrase in their model ⫺ resulting in OSV order at the surface (Neidle et al. 2000, 50). Note that according to criterion (v) introduced in section 1.3 (pragmatic neutrality), example (4a) would probably not be considered basic either.
(4) a.  top
        johni, mary love ti [ASL]
        ‘John, Mary loves.’
    b.  pro book buy ix3a [NGT]
        ‘He buys a book.’
Along similar lines, the OVS order observed in the NGT example in (4b) is taken to be the result of two syntactic mechanisms: pronominal right dislocation of the subject pronoun (pronoun copy) accompanied by pro-drop (Perniss/Pfau/Steinbach 2007, 15). Neidle et al. (2000) further argue that the distribution of syntactic non-manual markings (which spread over c-command domains) lends additional support to the existence of hierarchically organized constituents, thus further supporting their claim that the underlying word order of ASL is SVO. They conclude that previous claims that ASL utilised free word order are unfounded. Another issue of concern first raised in the post-modern period and now gaining more attention in the age of typological research is that of modality, with the similarities and differences between sign languages attracting increased attention. Amongst other things, this period led to work on simultaneity in all its guises (see examples in (2) above), and some questioning of how this phenomenon impacted on descriptions
of basic word order (e.g., Brennan 1994; Miller 1994). Clearly, simultaneity is highly problematic for a framework that assumes that hierarchical structure is mapped onto linear order.
4. Towards a typology of sign languages

In today’s climate, researchers are drawing on the results of work emanating from the modern and post-modern periods, consolidating knowledge and re-thinking theoretical assumptions with reference to cross-linguistic studies on aspects of syntax, semantics, and pragmatics (e.g., Perniss/Pfau/Steinbach 2007; Vermeerbergen/Leeson/Crasborn 2007). In the late twentieth century and early twenty-first century, work has tended to be cross-linguistic in nature, considering modality effects as a point of differentiation between spoken and sign languages. Moreover, studies sought to identify points that differentiate between sign languages, while also acknowledging the impact that articulation in the visual-spatial modality seems to have for sign languages, which leads to a certain level of similarity in certain areas. This phase of research maps onto Woll’s third phase, that of ‘typological research’, and has led to a significant leap forward in terms of our understanding of the relationship between sign languages and the ways in which sign languages are structured.
4.1. Cross-linguistic comparison based on picture elicitation

Early work which we might consider as mapping onto a typological framework, and which still has relevance today, involves the picture elicitation tasks first used by Volterra et al. (1984) for LIS (see chapter 42, Data Collection, for details). This study, which focused on eliciting data to reflect transitive utterances, has since been replicated for many sign languages, including work by Boyes-Braem et al. (1990) for Swiss-German Sign Language, Coerts (1994a,b) for NGT, Saeed, Sutton-Spence, and Leeson (2000) for Irish SL and BSL, Leeson (2001) for Irish SL, Sze (2003) for Hong Kong Sign Language (HKSL), Kimmelman (2011) for Russian Sign Language (RSL), and, more recently, comparative work on Australian Sign Language (Auslan), Flemish Sign Language (VGT), and Irish SL (Johnston et al. 2007) as well as on VGT and South African Sign Language (Vermeerbergen et al. 2007). These studies attempt to employ the same framework in their analysis of comparative word order patterning across sign languages, using the same set of sentence/story elicitation tasks, meant to elicit the same range of orders and strategies in the languages examined. Three kinds of declarative utterances were explored in particular: non-reversible sentences (i.e. where only one referent can be the possible Actor/Agent in the utterance; e.g. The boy eats a piece of cake), reversible sentences (i.e. where both referents could act as the semantic Agent; e.g. The boy hugs his grandmother), and locative sentences (these presented the positions of two referents relative to one another; e.g. The cat sits on the chair).
253
set up what we have called a visual context with the utilisation of many typical sign language techniques such as spatial referencing, use of handshape proforms, role, etc.” (Boyes-Braem et al. 1990, 119). For many of the sign languages examined, it was found that reversibility of the situation could have an influence on word order in that reversible sentences favoured SVO order while SOV order was observed more often in non-reversible sentences; this appeared to be the case in, for instance, LIS (Volterra et al. 1984) and VGT (Vermeerbergen et al. 2007). In Auslan, Irish SL, and HKSL, however, reversibility was not found to influence word order (Johnston et al. 2007; Sze 2003). Moreover, results from many of these studies suggest that locative sentences favour a different word order, namely Ground ⫺ Figure ⫺ locative predicate, a pattern that is likely to be influenced by the visual modality of sign languages. A representative example from NGT is provided in (5) (Coerts 1994a, 65), but see (7) for an alternative structure.
(5) table ball cl‘ball under the table’ [NGT]
    ‘The ball is under the table.’
Another study that made use of the same Volterra et al. elicitation materials is Vermeerbergen’s (1998) analysis of VGT. Using 14 subjects aged between 20 and 84 years, Vermeerbergen found that VGT exhibits systematic ordering of constituents in declarative utterances that contain two (reversible or non-reversible) arguments. What is notable in Vermeerbergen’s study is the clear definition of subject applied (work preceding Coerts (1994a,b) does not typically include definitions of terms used). Vermeerbergen interprets subject as a ‘psychological subject’, that is, “the particular about whom/which knowledge is added will be called a subject”. Similarly, her references to object are based on a definition of object as “the constituent naming the referent affected by what is expressed by the verb (the action, condition)” (1998, 4). However, we should note that this is a ‘mixed’ pair of definitions: object is defined in terms of semantic role (Patient/Theme) while subject is given a pragmatic definition (something like topic). Vermeerbergen found that SVO ordering occurred most frequently in her elicited data, although older informants tended to avoid this patterning. Analysing spontaneous data, with the aim of examining whether SVO and SOV occurred as systematically outside of her elicited data corpus, she found that actually only a small number of clauses contained verbs accompanied by explicit referents, particularly in clauses where two interacting animate referents were expressed. She notes that Flemish signers seem to avoid combining one single verb and more than one of the interacting arguments.

To this end, they may use mechanisms that clarify the relationship between the verb and the arguments while at the same time allowing for one of the arguments not to be overtly expressed (e.g. verb agreement, the use of both hands simultaneously, shifted attribution of expressive elements, etc.). (Vermeerbergen 1998, 2)
4.2. Semantic roles and animacy

Building on earlier studies, Coerts’ (1994a,b) work on NGT is one of the first attempts to explicitly list semantic roles as a mechanism that may influence argument relations.
Her objective was to determine whether or not NGT had a preferred constituent order. On this basis, she labelled argument positions for semantic function, including Agent, Positioner (the entity controlling a position), Zero (the entity primarily involved in a State), Patient, Recipient, Location, Direction, etc. Following Dik’s Functional Grammar approach (Dik 1989), Coerts divided texts into clauses and extra-clausal constituents, where a clause was defined as any main or subordinate clause as generally described in traditional grammar. Boyes-Braem et al. (1990) and Volterra et al. (1984) had identified what they referred to as ‘split sentences’, which Boyes-Braem describes as sentences that are broken into two parts, where “the first sentence in these utterances seem to function as ‘setting up a visual context’ for the action expressed in the second sentence” (Boyes-Braem et al. 1990, 116). A LIS utterance exemplifying this type of structure is provided in (6); the picture that elicited this utterance showed a woman combing a girl’s hair (Volterra et al. 1984; note that the example is glossed in Italian in the original article).
(6) child seated, mother comb [LIS]
    ‘The child is seated, and the mother combs (her hair).’
Coerts (1994a) identified similar structures in her NGT data, and argued that these should be analysed as two separate clauses where the first clause functions as a ‘Setting’ for the second clause. Coerts found that most of the clauses she examined contained two-place predicates where the first argument slot (A1) was typically filled by the semantic Agent argument (in Action predicates), Positioner (in Position predicates), Process or Force (in Process predicates), or Zero (in State predicates). The second argument slot (A2) tended to be filled by the semantic Patient role (in Action, Position, and State predicates) or Direction/Source (in Action, Position, and Process predicates). First arguments were considered more central than second arguments given that “first arguments are the only semantic arguments in one place predicates. That is, semantically defined, there can be no Action without an Agent, no Position without a Positioner, etc., but there can be an Action without a Patient and also a Position without a Location et cetera” (Coerts 1994a, 53). For locative utterances, as in (7) below, the general pattern identified was A1 V/A2 (Coerts 1994a, 56). In this example, the first argument (car) is signed first, followed by a simultaneous construction with the verbal predicate signed by the right hand and the second argument (the location bridge) by the left hand.
(7) R: (2h)car  3adrive-cl‘car’3b [NGT]
    L:          bridge ctr.up_____
    R: Agent    Agent-Verb
    L:          Loc
       A1       V/A2
    ‘The car goes under the bridge.’
This analytical approach mirrors work from the Functional and Cognitive Linguistics fields, which suggests a general tendency within word order across languages, claiming a natural link between form and meaning, with properties of meaning influencing and shaping form (e.g., Tomlin 1986). Of specific interest here is the Animated First Principle, whereby in basic transitive sentences, the most Agent-like element comes first. That is, there is a tendency for volitional actors to precede less active or less volitional participants. Coerts’ findings and those of others have identified this principle for several sign languages (e.g., Boyes-Braem et al. 1990; Leeson 2001; Kimmelman 2011). Significantly, Coerts found that sentence type was relevant to the discussion of constituent order in NGT. She writes that
The cross-linguistic study conducted by Saeed, Sutton-Spence and Leeson (2000), which compares BSL and Irish SL, builds on previous work by Volterra et al. (1984) and Coerts (1994a,b). This very small-scale study looked at a set of data elicited following the same elicitation procedure used by Volterra et al. (1984). Saeed, Sutton-Spence, and Leeson report finding the same types of structures as reported in studies on other sign languages that used the same elicitation materials and methodology, including split sentences, which they account for using Coerts’ (1994a,b) implementation of a Functional Grammar framework. Despite such striking similarities across languages, Saeed, Sutton-Spence, and Leeson also report differences between BSL and Irish SL in terms of use of particular features. For example, different patterns emerged with respect to how signers of BSL and Irish SL used simultaneous constructions, with their use being more prevalent among the BSL informants. It was also found that BSL signers preferred to establish contextual information in greater detail than their Irish SL counterparts. Saeed, Sutton-Spence, and Leeson report that BSL and Irish SL seem to share a more similar underlying semantic pattern than suggested at syntactic level alone: they report a high degree of consistency in the relationship between animacy and focus in both languages. This was evidenced by the fact that the more animate entities tended to be signed by the dominant hand in simultaneous constructions, and Ground elements were introduced before Figure elements in locative constructions. Finally, the authors note that constituent order, particularly in relation to the use of devices like simultaneity, seems more fixed in the BSL data examined than in the Irish SL data.
4.3. Morphosyntactic and syntactic factors Above, we have already seen that semantic factors such as reversibility and animacy can have an influence on word order in at least some sign languages. In addition, it has been found in a number of studies that morphosyntactic factors can also play a role. In particular, a different word order may be observed with verbs that carry certain morphological markers, such as agreement, classifiers, or aspect morphology. Chen Pichler (2001) subsumes the different types of markings under the term ‘re-ordering morphology’. First, it has been observed that in some sign languages, plain verbs favour SVO order while agreeing (or indicating) verbs favour SOV order; this pattern has been
12. Word order
described for VGT (Vermeerbergen et al. 2007), LSB (de Quadros 1999), and Croatian Sign Language (HZJ, Milković/Bradarić-Jončić/Wilbur 2006), among others. Secondly, in many sign languages, classifier constructions behave differently with respect to word order; see, for instance, the simultaneous constructions involving classifiers in (2). Finally, verbs that are modified to express aspect (e.g. by means of reduplication) may appear in a different position. In ASL and RSL, for instance, aspectually modified verbs usually appear clause-finally while the basic word order is SVO (Chen Pichler 2001; Kimmelman 2011). With respect to the impact of re-ordering morphology, Chen Pichler (2008, 307) provides the examples in (8) from her corpus of acquisition data (both examples were produced by 26-month-old girls). The verb in (8a) carries aspectual inflection, while the verb in (8b) combines with a Handling classifier; both verbs appear sentence-finally.
(8) a. cat searchaspect [ASL]
       ‘I’m looking and looking for the cat.’
    b. hey+ bag indexbag pick-up-by-handle
       ‘Hey (waving to get attention), pick up the bag.’
Returning to the question of basic word order, it could be argued that re-ordering morphology increases the morphological markedness of the verb. Hence, according to criterion (iv) in section 1.3, the alternative structures observed with morphologically marked verbs would not be considered basic. Yet another phenomenon that complicates the identification of basic word order is doubling. We shall not discuss this phenomenon in detail but only point out that in many sign languages, verbs in particular are commonly doubled (see chapter 14, Sentence Types, for doubling of wh-words). If the resulting structure is SVOV, then it is not always possible to determine whether the basic structure is SVO or SOV, that is, which of the two instances of the verb should be considered as basic (see Kimmelman (2011) for an overview of factors potentially influencing word order in sign languages).
4.4. Summary: the impact of modality

One of the key questions underpinning recent work on word order is whether modality effects are responsible for the reduced range of structural variation found in sign languages in comparison to spoken languages. Perniss, Pfau, and Steinbach (2007, 14) note that variation in sign languages is most striking in the realm of syntax given that “the merging of a syntactic phrase structure is highly abstract and independent of phonological properties of the items to be inserted ⫺ no matter whether your theory involves movement operations or not”. The above discussion has made clear that sign languages differ from each other with respect to word order ⫺ and that, at least to some extent, they do so along similar lines as spoken languages do. In addition, a semantic or morphosyntactic factor that may have an influence on word order in one sign language does not necessarily have the same influence in another sign language. Still, we also find striking similarities across sign languages, and at least some of these similarities appear to be influenced by the modality (e.g. Ground-Figure order in locative sentences, simultaneous constructions).
5. Methodological issues: how data type impacts results

Finally, it is important to consider the role that data collection plays in the context of studies of word order in sign languages (also see chapter 42). Initially, it might be useful to note that a range of different types of data was utilised and this may have implications for the findings. For example, Coerts (1994a, 66f) urges caution in the interpretation of her results on NGT because the data she analysed was based on a set of sentences elicited in isolation. She notes that while the use of drawings is common practice in sign language research, as they minimise the influence of spoken language, the elicitation drawings used in her study (like those of Volterra et al. 1984; Boyes-Braem et al. 1990; Vermeerbergen 1998; Johnston et al. 2007; and Saeed/Sutton-Spence/Leeson 2000) involved sets of two pictures which were minimally different, clearly contrastive with respect to one constituent. On this basis, she notes that these contrasts may potentially have influenced the resulting linguistic encoding of the sentences, involving constructions that mark contrast. In the same way, Liddell’s early work on ASL was dependent on the translation of English sentences, which potentially allows for an increase in production of English-based or English-like constructions (Liddell 1980). In contrast, the work of Neidle et al. (2000) focussed only on native signers of ASL, and this is an important issue, as only 5–10 per cent of sign language users are ‘native’ insofar as they are born into Deaf families where a sign language is the primary language at home. For the remaining 90–95 per cent, we might expect that the patterns described by Neidle et al., while relevant, will not be reflected as consistently in their signing as they are in the productions of native signers. Consequently, the issue of frequency patterning as a key indicator of basic word order in a language (Brennan 1994) – and indeed, for grammar in general – remains unresolved in this instance.

Similarly, we can note that it is often difficult to compare the results of different studies in an unambiguous way. This is due to the fact that the range of data types varies across studies, with varying degrees of other influences on the expected target language output – be it the influence of a spoken language or the intrusion of contrastive constructions due to an informant’s wish to be maximally comprehensive in their output. We must take into consideration the constraints that applied in the data collation of these studies and the impact this has on the reliability of their findings.

Ultimately, we might argue that the most ‘valid’ results are those that compare and contrast constituent order across a range of data types, and we note that moves towards comparing and contrasting sign languages, backed up by access to multimodal data where annotations can be viewed relative to the source language data, enhance efforts towards identifying a true typology of sign languages (see Pizzuto/Pietrandrea (2001) for discussion of problems with glossing, for example). Indeed, we suggest that as linguists document more unrelated sign languages, thereby facilitating cross-linguistic studies based on a richer data set from a range of related and unrelated sign languages, our understanding of just how different sign languages are from each other will increase, allowing for a true typology of sign languages to unfold.
The documentation of sign languages like Jordanian Sign Language (LIU, Hendriks 2008), Adamorobe Sign Language (AdaSL, Nyst 2007), and the Al-Sayyid Bedouin Sign Language (ABSL, Kisch 2008; Sandler et al. 2005) adds to the pool of languages on the basis of which we may found a considered typology. We must bear in mind that some communities of sign language users are larger and more robust than others (e.g., village sign languages versus national sign languages), a fact that has implications for language transmission and usage, which in turn has potential implications for all kinds of grammatical analysis, including word order (see chapter 24 on village (shared) sign languages).

Given the range of theoretical frameworks that have been adopted in considering word order in sign languages, it is practically impossible to compare and contrast findings across all studies: indeed, we refer the reader to Johnston et al.’s (2007) problematisation of cross-linguistic analyses of sign languages. What we can identify here is (i) the major thrust of the range of underlying approaches applied (e.g., descriptive, generative, functionalist, cognitive, semantic, typological); (ii) the languages considered; (iii) the methodologies applied; and (iv) the general period in which the work took place relative to Woll’s three-way distinction. All this can assist in our interpretation of the data under analysis. For example, we have seen that some studies only focus on the semantic analysis of a narrow range of structures (e.g., agreement verbs, transitive utterances, passives, question structures) while others are more broadly based and offer general syntactic patterns for a given language (general valency operations for a language). This has been most notable for research on ASL and BSL, where (to generalise) the consensus seems to be that ASL is an SVO language while BSL is said to be a topic-comment language.

A final note on word order in sign languages must address the role that new technologies play. The development of software such as SignStream© and ELAN has allowed for significant strides forward in the development of digital corpora, and the analysis of such data promises to bring forth the potential for quantitative analyses as well as the opportunity for richer and more broadly based qualitative analyses than have been possible to date (see chapter 43, Transcription, for details). Digital corpus work for a range of sign languages including Auslan, BSL, Irish SL, LSF, NGT, VGT, and SSL is now underway. Neidle et al. (2000) employed SignStream© in their analysis of data, which allowed them to pinpoint the co-occurrence of non-manual features with manual features in a very precise way. Other syntactic work using SignStream© includes that of Cecchetto, Geraci, and Zucchi (2009). Similarly, work in ELAN has allowed for closer analysis of both the frequency of structures and the co-occurrence of structures, and promises to facilitate a quantum leap forward in terms of analysis and sharing of data. One of the main challenges is to ensure that the analysis of less well-supported sign languages is not left behind in this exciting digital period.
6. Conclusion

This chapter has provided a bird’s eye view of key issues relating to word order and sign languages. Following Bouchard and Dubuisson (1995), we identified three aspects important to word order: (i) a functional aspect; (ii) an articulatory aspect; and (iii) the presumption of the existence of a basic word order. We outlined the relationship between signs and words before providing a historically based survey of research on word order in sign languages, following Woll’s (2003) identification of three important phases of research: the first concentrating on similarities between sign and spoken languages; the second focussing on the visual-gestural modality of sign languages; and the third switching the emphasis to typological studies. We touched on the importance of such issues as non-manual features, simultaneity, and pragmatic processes like topicalisation and pointed out that the available studies on word order are embedded within different theoretical frameworks (including Functional Grammar, Cognitive Grammar, and Generative Grammar). We noted that over time, work on word order issues in sign languages has become more complex, as issues such as simultaneity, iconicity, and gesture in sign languages were included in the discussion. Similarly, as more and more unrelated sign languages are analysed, a more comprehensive picture of the relationship between sign languages and of the striking similarity of form and function at the non-manual level for certain structures (such as interrogatives) has emerged. However, we also noted that, due to the lack of a coherent approach to the description and analysis of data across sign languages, no clear claims regarding a typology of word order in sign languages can yet be made. Finally, we saw that new technologies promise to make the comparison of data within and across sign languages more reliable, and we predict that the age of digital corpora will offer new insights into the issue of word order in sign languages.
7. Literature

Armstrong, David F./Wilcox, Sherman E. 2007 The Gestural Origin of Language. Oxford: Oxford University Press.
Baker-Shenk, Charlotte/Cokely, Dennis 1980 American Sign Language – A Teacher’s Resource Text on Grammar and Culture. Washington, DC: Gallaudet University Press.
Bergman, Brita/Wallin, Lars 1985 Sentence Structure in Swedish Sign Language. In: Stokoe, William C./Volterra, Virginia (eds.), Proceedings of the IIIrd International Symposium on Sign Language Research. Silver Spring: Linstok Press, 217–225.
Bouchard, Denis/Dubuisson, Colette 1995 Grammar, Order and Position of Wh-Signs in Quebec Sign Language. In: Sign Language Studies 87, 99–139.
Bouchard, Denis 1997 Sign Languages and Language Universals: The Status of Order and Position in Grammar. In: Sign Language Studies 91, 101–160.
Boyes-Braem, Penny/Fournier, Marie-Louise/Rickli, Francoise/Corazza, Serena/Franchi, Maria-Louisa/Volterra, Virginia 1990 A Comparison of Techniques for Expressing Semantic Roles and Locative Relations in Two Different Sign Languages. In: Edmondson, William H./Karlsson, Fred (eds.), SLR 87: Papers from the Fourth International Symposium on Sign Language Research. Hamburg: Signum, 114–120.
Brennan, Mary 1994 Word Order: Introducing the Issues. In: Brennan, Mary/Turner, Graham H. (eds.), Word Order Issues in Sign Language. Working Papers. Durham: International Sign Linguistics Association, 9–46.
Cecchetto, Carlo/Geraci, Carlo/Zucchi, Sandro 2009 Another Way to Mark Syntactic Dependencies. The Case for Right Peripheral Specifiers in Sign Languages. In: Language 85(2), 1–43.
Chen Pichler, Deborah C. 2001 Word Order Variation and Acquisition in American Sign Language. PhD Dissertation, University of Connecticut.
Chen Pichler, Deborah C. 2008 Views on Word Order in Early ASL: Then and Now. In: Quer, Josep (ed.), Signs of the Time: Selected Papers from TISLR 8. Hamburg: Signum, 293–318.
Chomsky, Noam 1995 The Minimalist Program. Cambridge, MA: MIT Press.
Coerts, Jane 1994a Constituent Order in Sign Language of the Netherlands. In: Brennan, Mary/Turner, Graham H. (eds.), Word Order Issues in Sign Language. Working Papers. Durham: International Sign Linguistics Association, 44–72.
Coerts, Jane 1994b Constituent Order in Sign Language of the Netherlands and the Functions of Orientations. In: Ahlgren, Inger/Bergman, Brita/Brennan, Mary (eds.), Perspectives on Sign Language Structure. Durham: ISLA, 69–88.
Cuxac, Christian 2000 La Langue des Signes Française (LSF). Les Voies de l’Iconicité. Paris: Ophrys.
Deuchar, Margaret 1983 Is BSL an SVO Language? In: Kyle, Jim/Woll, Bencie (eds.), Language in Sign. London: Croom Helm, 69–76.
Dik, Simon C. 1989 The Theory of Functional Grammar. Dordrecht: Foris.
Dryer, Matthew S. 2007 Word Order. In: Shopen, Timothy (ed.), Language Typology and Syntactic Description. Vol. I: Clause Structure (2nd Edition). Cambridge: Cambridge University Press, 61–131.
Dudis, Paul 2004 Body Partitioning and Real-space Blends. In: Cognitive Linguistics 15(2), 223–238.
Engberg-Pedersen, Elisabeth 1994 Some Simultaneous Constructions in Danish Sign Language. In: Brennan, Mary/Turner, Graham H. (eds.), Word-Order Issues in Sign Language: Working Papers. Durham: International Sign Linguistics Association, 73–87.
Fauconnier, Gilles 1985 Mental Spaces. Cambridge, MA: MIT Press.
Fauconnier, Gilles 1997 Mappings in Thought and Language. Cambridge: Cambridge University Press.
Fischer, Susan D. 1975 Influences on Word Order Change in ASL. In: Li, Charles (ed.), Word Order and Word Order Change. Austin: University of Texas Press, 1–25.
Fischer, Susan D. 1990 The Head Parameter in ASL. In: Edmondson, William H./Karlsson, Fred (eds.), SLR ’87: Papers from the Fourth International Symposium on Sign Language Research. Hamburg: Signum, 75–85.
Friedman, Lynn A. 1976 Subject, Object, and Topic in American Sign Language. In: Li, Charles (ed.), Subject and Topic. New York: Academic Press, 125–148.
Glück, Susanne/Pfau, Roland 1998 On Classifying Classification as a Class of Inflection in German Sign Language. In: Cambier-Langeveld, Tina/Lipták, Anikó/Redford, Michael (eds.), ConSole VI Proceedings. Leiden: SOLE, 59–74.
Greenberg, Joseph H. (ed.) 1966 Universals of Language. Second Edition. Cambridge, MA: MIT Press.
Hale, Ken 1983 Warlpiri and the Grammar of Non-configurational Languages. In: Natural Language and Linguistic Theory 1, 5–47.
Hendriks, Bernadet 2008 Jordanian Sign Language: Aspects of Grammar from a Cross-linguistic Perspective. PhD Dissertation, University of Amsterdam. Utrecht: LOT.
Hopper, Paul J./Thompson, Sandra A. 1984 The Discourse Basis for Lexical Categories in Universal Grammar. In: Language 60(4), 703–752.
Jackendoff, Ray S. 1990 Semantic Structures. Cambridge, MA: MIT Press.
Janzen, Terry 1998 Topicality in ASL: Information Ordering, Constituent Structure, and the Function of Topic Marking. PhD Dissertation, University of New Mexico.
Janzen, Terry 1999 The Grammaticization of Topics in American Sign Language. In: Studies in Language 23(2), 271–306.
Janzen, Terry 2005 Perspective Shift Reflected in the Signer’s Use of Space. CDS Monograph No. 1. Dublin: Centre for Deaf Studies, School of Linguistic, Speech and Communication Sciences.
Janzen, Terry/O’Dea, Barbara/Shaffer, Barbara 2001 The Construal of Events: Passives in American Sign Language. In: Sign Language Studies 1(3), 281–310.
Johnston, Trevor/Vermeerbergen, Myriam/Schembri, Adam/Leeson, Lorraine 2007 ‘Real Data are Messy’: Considering Cross-linguistic Analysis of Constituent Ordering in Auslan, VGT, and ISL. In: Perniss, Pamela/Pfau, Roland/Steinbach, Markus (eds.), Visible Variation. Comparative Studies on Sign Language Structure. Berlin: Mouton de Gruyter, 163–206.
Kegl, Judy A./Neidle, Carol/MacLaughlin, Dawn/Hoza, Jack/Bahan, Ben 1996 The Case for Grammar, Order and Position in ASL: A Reply to Bouchard and Dubuisson. In: Sign Language Studies 90, 1–23.
Kimmelman, Vadim 2011 Word Order in Russian Sign Language: An Extended Report. In: Linguistics in Amsterdam 4. [http://www.linguisticsinamsterdam.nl/]
Kisch, Shifra 2008 “Deaf Discourse”: The Social Construction of Deafness in a Bedouin Community. In: Medical Anthropology 27(3), 283–313.
Kiss, Katalin É. (ed.) 1995 Discourse Configurational Languages. Oxford: Oxford University Press.
Kiss, Katalin É. 2002 The Syntax of Hungarian. Cambridge: Cambridge University Press.
Langacker, Ronald W. 1991 Foundations of Cognitive Grammar, Vol. II: Descriptive Applications. Stanford, CA: Stanford University Press.
LaPolla, Randy J. 1995 Pragmatic Relations and Word Order in Chinese. In: Downing, Pamela/Noonan, Michael (eds.), Word Order in Discourse. Amsterdam: Benjamins, 297–330.
Leeson, Lorraine 2001 Aspects of Verbal Valency in Irish Sign Language. PhD Dissertation, Trinity College Dublin.
Leeson, Lorraine/Saeed, John I. 2007 Conceptual Blending and the Windowing of Attention in Irish Sign Language. In: Vermeerbergen, Myriam/Leeson, Lorraine/Crasborn, Onno (eds.), Simultaneity in Signed Languages: Form and Function. Amsterdam: Benjamins, 55–72.
Leeson, Lorraine/Saeed, John I. 2012 Irish Sign Language. Edinburgh: Edinburgh University Press.
Li, Charles N./Thompson, Sandra A. 1976 Subject and Topic: A New Typology of Language. In: Li, Charles N. (ed.), Subject and Topic. New York: Academic Press, 457–490.
Liddell, Scott K. 1977 An Investigation into the Structure of American Sign Language. PhD Dissertation, University of California, San Diego.
Liddell, Scott K. 1980 American Sign Language Syntax. The Hague: Mouton.
Liddell, Scott K. 2003 Grammar, Gesture, and Meaning in American Sign Language. Cambridge: Cambridge University Press.
Massone, María Ignacia/Curiel, Mónica 2004 Sign Order in Argentine Sign Language. In: Sign Language Studies 5(1), 63–93.
Meir, Irit 1998 Thematic Structure and Verb Agreement in Israeli Sign Language. PhD Dissertation, The Hebrew University of Jerusalem.
Milković, Marina/Bradarić-Jončić, Sandra/Wilbur, Ronnie 2006 Word Order in Croatian Sign Language. In: Sign Language & Linguistics 9(1/2), 169–206.
Miller, Chris 1994 Simultaneous Constructions in Quebec Sign Language. In: Brennan, Mary/Turner, Graham H. (eds.), Word Order Issues in Sign Language. Durham: ISLA, 89–112.
Mithun, Marianne 1987 Is Basic Word Order Universal? In: Tomlin, Russell S. (ed.), Coherence and Grounding in Discourse. Amsterdam: Benjamins, 281–328.
Nadeau, Marie/Desouvrey, Louis 1994 Word Order in Sentences with Directional Verbs in Quebec Sign Language. In: Ahlgren, Inger/Bergman, Brita/Brennan, Mary (eds.), Perspectives on Sign Language Structure: Papers from the Fifth International Symposium on Sign Language Research, Vol. 1. Durham: International Sign Linguistics Association, 149–158.
Neidle, Carol/Kegl, Judy/MacLaughlin, Dawn/Bahan, Benjamin/Lee, Robert G. 2000 The Syntax of American Sign Language: Functional Categories and Hierarchical Structure. Cambridge, MA: MIT Press.
Nilsson, Anna-Lena 2010 Studies in Swedish Sign Language. Reference, Real Space Blending, and Interpretation. PhD Dissertation, Stockholm University.
Nyst, Victoria 2007 A Descriptive Analysis of Adamorobe Sign Language (Ghana). PhD Dissertation, University of Amsterdam. Utrecht: LOT.
Pensalfini, Robert 2003 A Grammar of Jingulu: An Aboriginal Language of the Northern Territory. Canberra: Pacific Linguistics.
Perniss, Pamela M. 2007 Space and Iconicity in German Sign Language (DGS). PhD Dissertation, Max-Planck Institute for Psycholinguistics, Nijmegen.
Perniss, Pamela/Pfau, Roland/Steinbach, Markus 2007 Can’t You See the Difference? Sources of Variation in Sign Language Structure. In: Perniss, Pamela/Pfau, Roland/Steinbach, Markus (eds.), Visible Variation: Comparative Studies on Sign Language Structure. Berlin: Mouton de Gruyter, 1–34.
Perniss, Pamela/Pfau, Roland/Steinbach, Markus (eds.) 2007 Visible Variation: Comparative Studies on Sign Language Structure. Berlin: Mouton de Gruyter.
Pizzuto, Elena/Pietrandrea, Paola 2001 The Notation of Signed Texts. In: Sign Language & Linguistics 4(1/2), 29–45.
Quadros, Ronice Müller de 1999 Phrase Structure of Brazilian Sign Language. PhD Dissertation, Pontifícia Universidade Católica do Rio Grande do Sul.
Risler, Annie 2007 A Cognitive Linguistic View of Simultaneity in Process Signs in French Sign Language. In: Vermeerbergen, Myriam/Leeson, Lorraine/Crasborn, Onno (eds.), Simultaneity in Signed Languages: Form and Function. Amsterdam: Benjamins, 73–101.
Saeed, John I./Sutton-Spence, Rachel/Leeson, Lorraine 2000 Constituent Order in Irish Sign Language and British Sign Language – A Preliminary Examination. Poster Presented at the 7th International Conference on Theoretical Issues in Sign Language Research (TISLR), Amsterdam.
Sallandre, Marie-Anne 2007 Simultaneity in French Sign Language Discourse. In: Vermeerbergen, Myriam/Leeson, Lorraine/Crasborn, Onno (eds.), Simultaneity in Signed Languages: Form and Function. Amsterdam: Benjamins, 103–125.
Sandler, Wendy/Meir, Irit/Padden, Carol/Aronoff, Mark 2005 The Emergence of Grammar: Systematic Structure in a New Language. In: Proceedings of the National Academy of Sciences 102(7), 2661–2665.
Sandler, Wendy/Lillo-Martin, Diane 2006 Sign Language and Linguistic Universals. Cambridge: Cambridge University Press.
Shaffer, Barbara 2004 Information Ordering and Speaker Subjectivity: Modality in ASL. In: Cognitive Linguistics 15(2), 175–195.
Sze, Felix Y.B. 2003 Word Order of Hong Kong Sign Language. In: Baker, Anne/Bogaerde, Beppie van den/Crasborn, Onno (eds.), Cross-linguistic Perspectives in Sign Language Research (Selected Papers from TISLR 2000). Hamburg: Signum, 163–191.
Taub, Sarah 2001 Language from the Body: Iconicity and Metaphor in American Sign Language. Cambridge: Cambridge University Press.
Taub, Sarah/Galvan, Dennis 2001 Patterns of Conceptual Encoding in ASL Motion Descriptions. In: Sign Language Studies 1(2), 175–200.
Thorvaldsdottir, Gudny 2007 Space in Icelandic Sign Language. MPhil Dissertation, School of Linguistic, Speech and Communication Sciences, Trinity College Dublin.
Tomlin, Russell S. 1986 Basic Word Order: Functional Principles. London: Croom Helm.
Valli, Clayton/Lucas, Ceil/Mulrooney, Kristin J. 2006 Linguistics of American Sign Language. Fourth Edition. Washington, DC: Gallaudet University Press.
Vermeerbergen, Myriam 1998 Word Order Issues in Sign Language Research: A Contribution from the Study of Flemish-Belgian Sign Language. Paper Presented at the 6th International Conference on Theoretical Issues in Sign Language Research (TISLR), Washington, DC.
Vermeerbergen, Myriam/Leeson, Lorraine 2011 European Sign Languages – Towards a Typological Snapshot. In: Auwera, Johan van der/Kortmann, Bernd (eds.), Field of Linguistics: Europe. Berlin: Mouton de Gruyter, 269–287.
Vermeerbergen, Myriam/Leeson, Lorraine/Crasborn, Onno (eds.) 2007 Simultaneity in Signed Languages: Form and Function. Amsterdam: Benjamins.
Vermeerbergen, Myriam/Van Herreweghe, Mieke/Akach, Philemon/Matabane, Emily 2007 Constituent Order in Flemish Sign Language (VGT) and South African Sign Language (SASL). In: Sign Language & Linguistics 10(1), 25–54.
Volterra, Virginia/Laudanna, Alessandro/Corazza, Serena/Radutzky, Elena/Natale, Francesco 1984 Italian Sign Language: The Order of Elements in the Declarative Sentence. In: Loncke, Filip/Boyes-Braem, Penny/Lebrun, Yvan (eds.), Recent Research on European Sign Language. Lisse: Swets and Zeitlinger, 19–48.
Wilbur, Ronnie B. 1987 American Sign Language: Linguistic and Applied Dimensions. Second Edition. Boston, MA: Little Brown.
Wilcox, Phyllis 2000 Metaphor in American Sign Language. Washington, DC: Gallaudet University Press.
Wilcox, Sherman 2004 Cognitive Iconicity: Conceptual Spaces, Meaning and Gesture in Sign Languages. In: Cognitive Linguistics 15(2), 119–147.
Woll, Bencie 2003 Modality, Universality and the Similarities Across Sign Languages: An Historical Perspective. In: Baker, Anne/Bogaerde, Beppie van den/Crasborn, Onno (eds.), Crosslinguistic Perspectives in Sign Language Research. Selected Papers from TISLR 2000. Hamburg: Signum, 17–27.
Lorraine Leeson and John Saeed, Dublin (Ireland)
13. The noun phrase

1. Introduction
2. Characteristics of this modality with consequence for noun phrase structure
3. What’s in a noun phrase? A closer look inside
4. Number: expression of plurality
5. DP-internal word order
6. Conclusion
7. Literature
Abstract

This chapter considers, within the context of what is attested crosslinguistically, the structure of the noun phrase (NP) in American Sign Language (ASL). This includes discussion of the component parts of the noun phrase and the linear order in which they occur. The focus here is on certain consequences for the organization and properties of ASL noun phrases that follow from the possibilities afforded by the visual-gestural modality; these are therefore also typical, in many cases, of noun phrases in other sign languages. In particular, the use of space for expression of information about reference, person, and number is described, as is the use of the non-manual channel for conveying linguistic information. Because of the organizational differences attributable to modality, there are not always direct equivalents of distinctions that are relevant in spoken vs. sign languages, and controversies about the comparative analysis of certain constructions are also discussed.
1. Introduction

We use the term ‘noun phrase’ (or ‘NP’) to refer to the unit that contains a noun and its modifiers, although, following Abney (1987) and much subsequent literature, these phrases would be analyzed as a projection of the determiner node, and therefore, more precisely, as determiner phrases (DPs). The NP in American Sign Language (ASL) has the same basic elements and hierarchical structure as in other languages. There are, however, several aspects of NP structure in sign languages that take advantage of possibilities afforded by the visual-gestural modality. Some relevant modality-specific characteristics are discussed in section 2. Section 3 then examines more closely the components of NPs in ASL, restricting attention to singular NPs. Expression of number is considered in section 4. Section 5 then examines the basic word order of these elements within the NP.
2. Characteristics of this modality with consequence for noun phrase structure

2.1. Use of space to express person and reference in ASL and other sign languages

Sign languages generally associate referents with locations in the signing space. For referents physically present, their actual locations are used; first- and second-persons are associated spatially with the signer and addressee, respectively. Referential locations are established in the signing space for non-present third-person referents. See Neidle and Lee (2006) for review of ASL person distinctions, a subject of some controversy; on use of referential space for present vs. non-present referents, see e.g. Liddell (2003). The use of space provides a richer system for referential distinctions than the person distinctions typical of spoken languages.

Although referential NPs crosslinguistically are generally assumed in the syntactic literature to contain abstract referential features, sign languages are unique in enabling overt morphological expression of referential distinctions through association of distinct third-person referents with specific locations in the signing space (Kegl 1976 [2003]). Moreover, in contradistinction to spoken languages, sign languages include referential features among the phi- (or agreement) features, i.e. those features that can be expressed morphologically on multiple elements in an agreement relationship, either within the NP (a phenomenon frequently described as ‘concord’) or between an NP and other sentential elements that agree syntactically with the NP. For this reason, we refer to these referentially significant locations as phi-locations (following Neidle and Lee 2006), although not all spatial locations – and not even all types of deictic gestures – that are used in signing take on this kind of referential significance. Space is used for many distinct functions in sign languages.

ASL determiners, pronominals, possessives, and reflexives/intensifiers are produced by pointing to these phi-locations (for pronouns, see chapter 11). As seen in Figure 13.1, different hand shapes distinguish these functions (a phenomenon also found in other sign languages, cf. Bos 1989; Sutton-Spence/Woll 1999; Tang/Sze 2002; Engberg-Pedersen 2003; Alibašić Ciciliani/Wilbur 2006; Johnston/Schembri 2007; Hatzopoulou 2008; Hendriks 2008). These referential locations are also accessed to mark agreement with NPs by other sentential elements. Although not all classes of verbs can occur with overt agreement morphology (see chapter 7 on verb agreement), agreeing verbs such as give, shown in Figure 13.1e, have start and end points that correspond to the phi-locations of the subject and object, respectively. We return to these elements in section 3.

Fig. 13.1: Use of spatial locations associated with person and reference in ASL (a. Definite determiner; b. Pronoun; c. Possessive; d. Reflexive; e. Verb agreement – give start and end positions)
2.2. NP-internal agreement relations in sign and spoken languages

The elements enumerated in section 2.1 and illustrated in Figure 13.1 for ASL correspond to those that have been observed crosslinguistically to enter into agreement relations by virtue of the potential for morphological expression of matching phi-features. However, there has been some contention that sign languages do not exhibit ‘agreement’. In particular, Liddell (e.g., 2000a,b) has pointed to modality differences – such as the fact that the locations in the signing space used referentially do not constitute a finite set of discrete elements, and that such referential features do not enter into agreement in spoken languages – to suggest that agreement is not involved here. However, even spoken languages differ in terms of the specific features that partake in agreement/concord relations. In some but not all languages, gender agreement or concord may be found; there may be agreement or concord with person and/or number. Person features themselves are also indexicals, as pointed out by Heim (2008, 37):
“they denote functions defined with reference to an utterance context that determines participant roles such as speaker and addressee.” What is unusual in sign languages – attributable to the richness afforded by the use of space for these purposes – is the greater potential for expression of referential distinctions. (Number features, more limited in this respect in ASL, are considered in section 4.) However, the matching of features among syntactic elements is of essentially the same nature as in other agreement systems. Thus, we analyze the uses of phi-locations as reflexes of agreement.
2.3. Non-manual expression of syntactic information

There are also cases in which these same phi-locations that manifest agreement in manual signing may be accessed non-manually. The use of facial expressions and head gestures to convey essential syntactic information, such as negation and question status, is well documented (for ASL, see, e.g., Baker/Padden 1978; Liddell 1980; Baker-Shenk 1983; and Neidle 2000 for discussion and other references). Such expressions play a critical role in many aspects of the grammar of sign languages, but especially with respect to conveying certain types of syntactic information (see also Sandler and Lillo-Martin (2006), who consider these to be prosodic in nature, cf. also chapter 4 on prosody). Generally these non-manual syntactic markings occur in parallel with manual signing, frequently extending over the logical scope of the syntactic node (functional head) that contains the features expressed non-manually (Neidle et al. 2000). There are cases where phi-features can also be expressed non-manually, most often through head tilt or eye gaze pointing toward the relevant phi-locations. Lip-pointing toward phi-locations is also used in some sign languages (Obando/Elena 2000 discussed Nicaraguan Sign Language). Neidle et al. (2000), Bahan (1996), and MacLaughlin (1997) described cases in which head tilt/eye gaze can display agreement within both the clause (with subject/object) and the NP (with the possessor/main noun), displaying interesting parallels. Thompson et al. (2006) presented a statistical analysis of the frequency of eye gaze, based on data collected with an eye tracker, that purports to disconfirm this proposal; however, they seriously misrepresent the analysis and its predictions. Further investigation (Neidle/Lee 2006) revealed that the manifestation of agreement through head tilt/eye gaze (Bahan 1996; MacLaughlin 1997; Neidle et al. 2000) is not semantically neutral, but rather is associated with focus. Thus it would appear that what is involved in this previously identified construction is a focus marker instantiated by non-manual expression of the subject and object agreement features.
3. What’s in a noun phrase? A closer look inside

In this section, elements that make up the NP will be discussed. The analysis of some of these elements has been a subject of controversy in the literature, and in some rather surprising cases, such as the expressions of plurality, comprehensive descriptions have been lacking. For discussion of word order within NP, we crucially restrict attention to elements occurring within the NP and in the canonical word order. ASL exhibits some flexibility with respect to word order: there are many constructions in which deviations from the base word order occur, as is frequently recoverable from prosodic cues. Those marked orders are excluded from consideration here, as this chapter seeks to describe the basic underlying word order within NP.
3.1. Determiners – definite vs. indefinite – and adverbials

Pointing to a location in the signing space can be associated with a range of different functions, including several discussed here as well as the expression of adverbials of location. We gloss this as ix since it generally involves the index finger. (In some very specific situations, the thumb can be used instead. A different hand shape, an open hand, can also be used for honorifics.) Subscripts are used to indicate person (first, second, or third) and potentially a unique phi-location, so that, for example, ix3i and poss3i (the possessive marker shown in Figure 13.1c) would be understood to involve the same phi-location; the use of the same subscript for both marks coreference. This multiplicity of uses of pointing has, in some cases, confounded the analysis of pointing gestures, since if different uses are conflated, then generalizations about specific functions are obscured.

Bahan et al. (1995) and MacLaughlin (1997) have argued that the prenominal ix is associated with definiteness and functions as a determiner, whereas the postnominal ix is adverbial and does not display a definiteness restriction. Previous accounts had generally treated the occurrences of prenominal and postnominal indexes as a unified phenomenon. There has been disagreement about whether sign languages have determiners at all, although it has been suggested that these indexes might be definite determiners (Wilbur 1979) or that they are some kind of determiner but lacking any correlation with definiteness (Zimmer/Patschke 1990). However, analysis focusing on prenominal indexes reveals not only a correlation with definiteness, but also a contrast between definite and indefinite determiners. An NP in ASL can contain a prenominal or postnominal ix, or both. In the construction in (1), the DP includes both a prenominal determiner and a postnominal adverbial, not unlike the French or Norwegian constructions shown in (2) and (3).

(1) [ix3i man ixloci]DP arrive [ASL]
    ‘The/that man there is arriving.’
(2) [cet homme-là] [French]
    ‘that man there’
(3) [den mannen der] [Norwegian]
    ‘that man there’
The ASL prenominal and postnominal ix, although frequently very similar in form, are nonetheless distinguishable, in terms of:

A. Articulatory restrictions. The determiner, occurring prenominally, has a fixed path length, whereas the postnominal index can be modified iconically to depict aspects of the location (e.g., distance, with longer distances potentially involving longer path length). This is consistent with other grammatical function words having a frozen form relative to related adverbials. (Compare the relatively frozen path length of the ASL modal marking future with the range of articulations allowed for the related temporal adverbial meaning ‘in the future’; for the latter, distance in the future can be expressed iconically through longer or shorter path movements, as discussed in, e.g., Neidle et al. (2000, 78).) Thus there is a contrast in acceptability between (4) and (5).

(4) [ix3i man ixloc“over there”]DP know president [ASL]
    ‘The/that man over there knows the president.’
(5) * [ixloc“over there” man ix3i]DP know president
B. Potential for distinct plural form. Only the prenominal ix can be inflected for plural in the way to be discussed in section 4. This is shown by the following examples (from MacLaughlin 1997, 122).

(6) [ixplural-arc man ixloc“over there”]DP know president [ASL]
    ‘The/those men over there know the president.’
(7) * [ixplural-arc man ixplural-arc]DP know president
(8) * [ixloc“over there” man ixplural-arc]DP know president
C. Semantic interpretation. The definiteness restriction, to be discussed in the next subsection, is found only with the prenominal ix. Compare examples (9) and (10) below (from MacLaughlin 1997, 117). Sentence (9) is infelicitous unless the man has previously been introduced into the discourse.

(9) [ix3i man]DP arrive [ASL]
    ‘The/that man is arriving.’
    * ‘A man is arriving.’
(10) [man ixloci]DP arrive
     ‘A/the man there is arriving.’

Although the postnominal index is compatible with an indefinite reading, the prenominal index is not.
3.1.1. Correlation with definiteness

Since expression of the definite determiner in ASL necessarily identifies reference unambiguously, the packaging of information is such that this determiner carries referential features, features of a kind not associated with definite articles in spoken languages. Giusti (2002) argues, for example, that referential features are associated with prenominal possessives and demonstratives (also intrinsically definite), but not with definite articles. By this definition, the definite determiner in ASL would be categorized as a demonstrative. There is also, however, an ASL sign glossed as that, shown in Figure 13.2, with a somewhat restricted usage, functioning to refer back pronominally to entities previously established in the discourse, or to propositions (which cannot normally be designated by ix). This sign does not often occur prenominally within the noun phrase, as in ‘that man’, although this is sometimes found (possibly as a result of English influence). This sign that can also be contracted with one, to give a sign glossed as that^one, also used pronominally.
Fig. 13.2: that
In usage, the determiner ix is somewhat intermediate between the definite article and demonstrative of English. In fact, an NP such as ‘ix man’ might be optimally translated into English sometimes as ‘the man’ and at other times as ‘that man’. Since expression of the ASL determiner ix necessarily incorporates referential information, it can only be used when the NP referred to is referential, which excludes its use with generics. (As De Vriendt and Rasquinet 1989 observed, sign languages generally do not make use of determiners in generic noun phrases.) Furthermore, it can only be used for referents that already have been associated in the discourse with a phi-location. Thus, the use of ix in ASL is more restricted than the use of definite articles. Furthermore, the definite determiner ix is not required within a definite NP. This is a significant difference when compared with definite articles in spoken languages that have them.

Van Gelderen (2007) has an interesting discussion of the transition that has occurred in many languages whereby demonstratives (occurring in specifier position) came to be reanalyzed as definite articles (occurring in the head of DP). The exact status of these definite determiners in ASL is unclear; it is possible that such a transition is in progress. However, like articles in spoken languages (postulated to occur in the head determiner of a DP), in ASL the determiner ix carries overt inflection for the nominal phi-features of the language (and in sign languages, these include referential features). Also like articles, the definite determiner is often produced without phonological stress and can be phonologically cliticized to the following sign. A stressed articulation of the prenominal ix is, however, possible; it forces a demonstrative reading.

So although ASL does not have exact equivalents of English definite articles or demonstratives, it does have a determiner that (1) is correlated with definiteness; (2) occurs, in the canonical surface word order of noun phrases, prenominally; (3) can be phonologically unstressed and can cliticize to the following sign; (4) bears overt agreement inflection; (5) is identical in form to pronouns (as discussed in section 3.2); and (6) occurs in complementary distribution with elements analyzed as occurring in the head of the DP (discussed below).
3.1.2. Distinction between definite and indefinite determiners

Unlike the definite determiner, which accesses a point in space, the indefinite determiner in ASL involves articulatory movement within a small region. This general distinction between definiteness and indefiniteness in ASL, the latter being associated with a region larger than a point, was observed by MacLaughlin (1997). Figure 13.3 illustrates the articulation of the definite vs. indefinite determiner. The latter, glossed as something/one (because when used pronominally, it would be translated into English as either ‘something’ or ‘someone’), is articulated with the same hand shape as the definite determiner, but with the index finger pointed upward and palm facing the signer; there is a small back and forth motion of the hand, the degree of which can vary with the degree of unidentifiability of the referent. The lack of certainty about the identity of the referent is also expressed through a characteristic facial expression illustrated in Figure 13.3, involving tensed nose, lowered brows, and sometimes also raising of the shoulders. When the referent is specific but indefinite (e.g., ‘I want to buy a book’ in a situation where I know which book I want to buy, but you don’t), the sign is articulated as an unstressed version of the numeral one (also illustrated in Figure 13.4), i.e., without the shaking of the hand and head and without the facial expression of uncertainty. There are many languages in which the indefinite article is an unstressed form of the numeral ‘one’ (e.g., Dutch, Greek, French, Spanish, Italian, and even English, historically, among many others). As with indefinite articles in other languages, the sign glossed as something/one also has a quantificational aspect to its meaning.

Fig. 13.3: Spatial distinction between reference: definite (point) vs. indefinite (region) (a. Definite determiner (or pronoun); b. Indefinite determiner (or pronoun); c. give (him) start and end positions; d. give (someone) start and end positions)

Fig. 13.4: Indefinite determiner vs. numeral ‘one’
A similar definiteness distinction is found in verbal agreement with the receiver argument of the verb give: compare the end hand shape of the verb give when used to mean ‘give him’ versus ‘give someone’ (with fingers spread, pointing to an area of space larger than the point represented by the fingers coming together), also illustrated in Figure 13.3. Although this kind of marking of indefiniteness as part of manual object agreement is rare, it provides support for this spatial correlation with (in)definiteness. Bahan (1996, 272–273) also observed that when eye gaze marks agreement, the gaze used with specific vs. non-specific NPs differs, the former involving a direct gaze to the phi-location, the latter, a somewhat darting gaze generally upward. As with definite determiners in definite NPs, the indefinite determiner is not required in an indefinite NP, as shown in (11).

(11) [(something/one) man]DP arrive [ASL]
     ‘A man is arriving.’

Finally, like definite determiners (4), indefinite determiners can also occur with a postnominal adverbial index (see (1) and (12)).

(12) [(something/one) man ixloc“over there”]DP arrive [ASL]
     ‘A man over there is arriving.’
3.1.3. Analysis of noun phrases in ASL as determiner phrases

Work by Bahan, Lee, MacLaughlin, and Neidle has made the standard assumption in the current theoretical literature that the determiner (D) is the head of a DP projection, with the NP occurring as a complement of D. The D head is the locus for the agreement features that may be realized by a lexical element occupying that node, such as a definite determiner. (It is also possible that in ASL, ix – when functioning as a demonstrative (if demonstrative and non-demonstrative uses are structurally distinct) – might be analyzed as occurring in the specifier of DP. This is left as an area for future research.) Other elements that may occupy this node will be discussed in the next subsections, including pronouns and the possessive marker glossed as poss. Determiners are in complementary distribution with those elements.
3.1.4. Non-manual expression of phi-features

The phi-features associated with the D node can be (but are not always) manifested non-manually by head tilt or eye gaze or both toward the relevant phi-location. This can occur simultaneously with the articulation of the determiner, or these non-manual expressions can spread over the rest of the DP (i.e., over the c-command domain of D). See MacLaughlin (1997, chapter 3) for further details, including ways in which phi-features can be expressed non-manually in possessive and non-possessive DPs, displaying parallelism with what can occur in transitive and intransitive clauses (Bahan 1996). It is also possible for the non-manual expression of those phi-features to occur in lieu of the manual articulation of the determiner. This can also occur with the pronominal use of ix, as mentioned in section 3.2.2.
3.1.5. Summary

Thus ASL, and sign languages more generally, realize definite determiners by gestures that involve pointing to the phi-locations associated with the main noun. Determiners in ASL occur in prenominal position, whereas there is also another use of ix – distinguishable in its articulatory possibilities from the definite determiner – in which the ix expresses adverbial information and occurs at the end of the NP. Typical of determiners occurring as head of DP, ix in ASL manifests overt inflection for phi-features (including referential features, despite the fact that such features are not included among phi-features in spoken languages). Definite determiners in ASL are also often phonologically unstressed and may cliticize phonologically to the following sign. As a result of the fact that they necessarily incorporate referential information (given the deictic nature of the articulation), definite determiners in ASL have a more restricted distribution than definite articles in spoken languages and may function as demonstratives (with phonological stress forcing a demonstrative reading). ASL also has an indefinite determiner related to the sign one. However, determiners are not required in definite or indefinite noun phrases.
3.2. Pronouns

3.2.1. Relation to determiners

As previously mentioned, both the indefinite and definite determiner can be used pronominally. Compare (9) and (11) with (13) and (14).

(13) ix3i arrive [ASL]
     ‘He/she/it is arriving.’
(14) something/one arrive
     ‘Someone is arriving.’

This is also common in other sign languages (e.g., Danish Sign Language (DSL) and Australian Sign Language (Auslan), cf. Engberg-Pedersen 2003; Johnston/Schembri 2007, 271; see also chapter 11 on pronouns) as well as many spoken languages (discussed, e.g., in Uriagereka 1992). For example, the definite determiner and pronoun are identical in form in the following Italian examples (Cardinaletti 1994, 199):

(15) La conosco [Italian]
     (I) her know
(16) la ragazza
     the girl
Since Postal’s (1966) proposal that pronouns are underlyingly determiners, a claim also essential to Abney’s (1987) DP analysis, there have been several different proposals to account for the parallelisms between pronouns and determiners, and for the different types of pronouns found within and across languages in terms of categorical and/or structural distinctions (e.g., Cardinaletti 1994; Déchaine/Wiltschko 2002). ASL has strong pronouns (i.e., pronouns that have the same syntactic distribution as full NPs) that are identical in form with determiners, and MacLaughlin analyzes them as occurring in the head of the DP. The issue of whether there is a null NP occurring as a sister to D within the subject DP of a sentence like (13) is left open by MacLaughlin.
3.2.2. Non-manual expressions of phi-features occurring with (or substituting for) manually articulated pronouns

The phi-features associated with a (non-first-person) pronoun can also be expressed non-manually by eye gaze toward the intended phi-location. This has been referred to as ‘eye-indexing’ (e.g., Baker/Cokely 1980). Eye gaze can suffice for pronominal reference, occurring in lieu of manual realization of the pronoun. Baker and Cokely observe (1980, 214) that “[t]his eye gaze is often accompanied by a slight brow raise and a head nod or tilt toward the referent,” that it is quite common for second-person reference, and that it allows for discretion with third-person reference.
3.2.3. Consequences of overt expression in pronouns of referential features

The fact that in ASL (and other sign languages) pronouns are referentially unambiguous is not without implications for syntactic constructions in which pronouns are involved. For example, ASL makes productive use of right dislocation, as shown in (17): an unstressed pronoun occurring sentence-finally and referring back to another NP (overt or null) in the sentence. (This has been referred to as ‘subject pronoun copy’, following Padden 1988, although not all constructions that have been described with that term are, in fact, right dislocation, and right dislocation can occur as well with non-subject arguments.) Moreover, the discourse conditions for use of right dislocation appear to be similar in ASL and other languages in which it occurs productively, such as French and Norwegian (Fretheim 1995; Gundel/Fretheim 2004, 188).

(17) j-o-h-n arrive ix3i [ASL]
     ‘John arrived, him.’
(18) Jean est arrivé, lui. [French]
     ‘John arrived, him.’
(19) Iskremen har jeg kjøpt, den. [Norwegian]
     the.ice.cream have I bought it
     ‘I bought ice cream.’

Languages that make productive use of right dislocation typically also allow for the possibility of a right-dislocated full NP, albeit serving a different function: to disambiguate the pronoun to which it refers back, as shown for French in (21). However, given that pronouns in ASL are unambiguous, this does not occur in ASL.

(20) * ix3i arrive j-o-h-n [ASL]
     ‘He arrived, John.’
(21) Il est arrivé, Jean. [French]
     ‘He arrived, John.’
Rather than concluding from the ungrammaticality of (20) that ASL lacks right dislocation entirely (as does e.g. Wilbur 1994), we view the absence of disambiguation by full NP right-dislocation in ASL as a predictable consequence of the fact that referential information is overtly expressed by ASL pronouns.
3.3. Possessives

The possessive marker is articulated in ASL with an open palm pointing toward the phi-location of the possessor. British Sign Language (BSL) and related sign languages use the closed fist to refer to possession that is or could be temporary, and ix for permanent possession (Sutton-Spence and Woll 1999). For a typological survey of possessive and existential constructions in sign languages, see Zeshan (2008). When the possessor is indefinite (and not associated with any phi-location), a neutral form of the possessive marker is used, with the hand pointing toward a neutral (central) position in the signing space. Syntactically, we analyze this possessive marker, glossed as poss, as occurring in the head D of the DP, and it can – but need not – co-occur with a possessor (a full DP) in the specifier position of the larger DP. This is illustrated in examples (22) and (23).

(22) [j-o-h-n [poss3i [friend]NP]D’]DP [ASL]
     ‘John’s friend’
(23) [[poss3i [friend]NP]D’]DP
     ‘his friend’

It is occasionally possible (especially with kinship relations or inalienable possession) to omit the poss sign, as shown in (24) and (25).

(24) j-o-h-n (poss3i) mother [ASL]
     ‘John’s mother’
(25) j-o-h-n (poss3i) leg
     ‘John’s leg’

When the possessive occurs without an overt ‘possessee’, it typically occurs in a reduplicated form, two quick movements, rather than one, of the open palm toward the phi-location. As also observed by MacLaughlin (1997), this is one typical effect of the phonological lengthening that occurs in constituent- or sentence-final position (Grosjean 1979; Coulter 1993) or in a position immediately preceding a deletion site or a syntactically empty node. There have been several studies of the effects of prosodic prominence and syntactic position on sign production (e.g., Coulter 1990, 1993; Wilbur 1999). ASL has phonological lengthening in environments similar to those in which it has been attested in spoken languages (Cooper/Paccia-Cooper 1980; Selkirk 1984). Liddell observed, for example, that a head nod accompanying manual material is often found in constructions involving gapping or ellipsis (Liddell 1980, 29–38). Phonological reduplication appears to be another such process that is more likely in contexts where phonological lengthening is expected, i.e., constituent-final position and syntactic positions immediately preceding null syntactic structures. (Frishberg 1978, for example, observed a diachronic change that affected a number of ASL compounds: when one element of a compound was lost over time, there was a compensatory lengthening of the remaining part that took the form of reduplication. So beak+wings evolved into a reduplicated articulation of just the first part of that original compound, giving the current sign for ‘bird’.) It is presumably not a coincidence that when one finds poss in a DP-final position – in a situation where there is either (a) no overt head noun, or (b) a marked word order in which the poss marker follows the main noun – poss is generally reduplicated, as indicated by ‘+’ in the glosses in (26) and (27).

(26) Context: ‘Whose book is that?’ [ASL]
     Reply: poss1+
     ‘Mine.’
(27) ix2 prefer [car poss1+]
     ‘You prefer my car.’

Similar reduplication is possible with other DP-internal elements, as will be discussed below. Interestingly, when ix3 is used as a personal pronoun, i.e., when it occurs as the sole overt element within a DP, it is not reduplicated. However, when it is used as a demonstrative without an overt head, with the meaning ‘that one’, the pointing gesture typically is reduplicated (providing some evidence that the ix may occupy distinct structural positions in the two cases; cf. section 3.1.3):

(28) Context: ‘Which one would you like?’ [ASL]
     Reply: ix3i+
     ‘That one.’
(29) Context: ‘Who do you like?’
     Reply: ix3i
     ‘Him.’
3.4. Reflexives

As shown in Figure 13.1d, the reflexive is articulated with the thumb facing upward, thumb pad pointing to the phi-location of its antecedent. (For first-person, the orientation is variable: the pad of the thumb can either be facing toward or away from the signer as the hand makes contact with the signer’s chest.) A reflexive can be used either pronominally (30) as an argument coreferential with an NP antecedent, or as an intensifier, as in (31) and (32).

(30) j-o-h-n hurt self3i [ASL]
     ‘John hurt himself.’
(31) j-o-h-n self3i arrive
     ‘John himself is arriving.’
(32) j-o-h-n write story self3i
     ‘John is writing the story himself.’

When self occurs in a prosodically prominent environment, it can also be produced with a reduplicated motion, of the kind just described for possessive pronouns. Crosslinguistically, there is a distinction between simplex reflexives, such as se in French or seg in Norwegian, and morphologically complex reflexives found in, e.g., English (him+self and her+self) or Auslan (composed of the personal pronoun followed by the sign self (Johnston/Schembri 2007)). See, for example, the discussion in König and Siemund (1999). Although it might appear that ASL self is a simplex form, it is in fact a morphological combination of the reflexive nominal self and the pronominal phi-features. The ASL (pro-)self forms have the syntactic properties that tend to characterize complex anaphors: notwithstanding claims to the contrary by Lillo-Martin (1995) (refuted by Lee et al. 1997), they are locally – rather than long-distance – bound, and they are not restricted to subject antecedents.
3.5. Nouns and adjectives

The spatial location in which nouns and adjectives are articulated in ASL does not typically convey referential information. However, there are some nouns and adjectives (a relatively limited set) whose articulation can occur in, or be oriented toward, the relevant phi-location, as discussed by MacLaughlin (1997). So for example, a sign like house or a fingerspelled name like j-o-h-n can be articulated in (or oriented in the direction of) the phi-location of the referent. See chapter 4 of MacLaughlin (1997) for more detailed description of nouns and adjectives that are articulated either in or oriented toward the relevant phi-location. (See also Rinfret (2010) on the spatial association of nouns in Quebec Sign Language (LSQ).)
3.6. Other elements that are ⫺ and are not ⫺ found in ASL NPs

Given the availability of classifier constructions for rich expression of spatial relations and motion, the use of prepositional phrases is more limited in sign than in spoken languages, within both clauses and noun phrases. It is also noteworthy that nouns in ASL do not take arguments (thematic adjectives or embedded clauses). Constructions that would be expressed in other languages by complex NPs (e.g., ‘the fact that it rained’) require paraphrases in ASL. The information conveyed by relative clauses in languages like English can be expressed instead by use of correlatives ⫺ clauses that occur in sentence-initial position, with a distinctive non-manual marking (traditionally, if inappropriately, referred to as ‘relative clause’ marking) ⫺ rather than by clauses embedded within NP arguments of the sentence. An example is provided in (33).
rc
(33) cat chase dog ix3i [ eat mouse ]IP
‘The cat that chased the dog ate the mouse.’
[ASL]
The non-manual marking described by Liddell (1978) and labeled here as ‘rc’ includes raised eyebrows, a backward tilt of the head, and “contraction of the muscles that raise both the cheeks and the upper lip” (Liddell 2003, 54). Frequently non-manual markings of specificity (e.g., nose wrinkle (Coulter 1978)) are also found. Note, however, that Liddell’s (1977) claims about the syntactic analysis of relative clauses differ from what is presented here. See, e.g., Cecchetto et al. (2006) and chapter 14 for discussion of strategies for relativization in LIS. For further discussion about what can occur in the left periphery in ASL, including correlative clauses, see Neidle (2003).
3.7. Summary

This section has surveyed some of the essential components of ASL NPs, restricting attention to singular NPs. We have shown that person/reference features participate in agreement relations within the noun phrase, and we have seen overt morphological inflection instantiating these features in determiners, pronouns, possessive markers, and reflexives. Predicate agreement with noun phrases, by verbs and adjectives (of the appropriate morphological class), also involves morphological expression of these same features. Section 4 examines expression of plurality within noun phrases. Section 5 then considers the canonical word order of elements within the noun phrase.
4. Number: expression of plurality

The discussion of the spatial locations associated with referential information has, up to this point, been restricted to noun phrases without any overt marking for plurality. Grammatically, ASL noun phrases (and particular elements within them) do not bear number features of singular vs. plural, but rather are generally either unmarked for number (consistent with either singular or plural interpretations) or overtly marked for plural. In the cases that are treated here as unmarked for number, Pfau and Steinbach (2006) analyze the plural form as distinguished from the singular by a Ø affix. Cases where plurals are not explicitly marked as such have often been described in the ASL literature as involving a plurality viewed as a collective (e.g., Padden 1988). For a more detailed discussion of plurality, see chapter 6.
4.1. Use of space

When plurality is expressed through explicit number morphology ⫺ as it is productively for determiners, pronouns, reflexives, and those agreeing verbs that can be so marked (although this is subject to certain restrictions) ⫺ the phi-location is generally represented spatially as an arc (rather than a point), using the same hand shapes that occur for the singular forms illustrated in Figure 13.1. The same general principles discussed earlier apply with respect to the way in which these phi-locations are accessed. This is illustrated schematically in Figure 13.5 and by an example of a plural ix (determiner or pronoun) in Figure 13.7. Plural object agreement, involving a final articulation of the verb with a sweeping motion across the arc associated referentially with the plural object, is shown in Figure 13.6. This can also interact with aspectual markings such as distributive; see MacLaughlin et al. (2000) for details.

Fig. 13.5: Phi-locations used for unmarked vs. plural 3rd-person referent (a point is used for a referent unmarked for number; an arc is used for a referent marked as plural)
Fig. 13.6: Plural object agreement (movement at the end of the verb gift to agree with a plural object)
Fig. 13.7: Index articulated in an arc to indicate plural

Thus, when definite determiners, pronouns, possessives, reflexives, and agreeing verbs are overtly marked for plural number, there is a sweeping motion between the endpoints of the plural arc (rather than the pointing motion described in section 2), but utilizing the same hand shapes as for the singular forms illustrated in Figure 13.1. Like the person features and referential features discussed earlier, then, number features (and specifically, plurality), when present, also have a spatial instantiation; however, plurality is associated not with a point but with an arc-like region of space.
4.2. Marking of plurality on nouns and adjectives within noun phrases

There is as yet no comprehensive account of plural formation in ASL, but Pfau and Steinbach (2005, 2006) give a comprehensive overview of plural formation in German Sign Language (DGS) and discuss modality-specific and typological aspects of the expression of plural in sign languages. A few generalizations about the marking of plurality on nouns in ASL are contained in Wilbur (1987) and attributed to the unpublished Jones and Mohr (1975); Baker and Cokely (1980, 377) list sentence, language, rule, meaning, specialty-field, area, room/box, house, street/way, and statue as allowing an overt plural form, formed by a kind of reduplication.
The kind of arc that is used for predicate agreement (e.g., for verbs or predicative adjectives) can also mark plurality for a small class of nouns that originated as classifiers, such as box, seen in Figure 13.8. However, most nouns that can be overtly marked for plural ⫺ although this is still a limited set ⫺ are so marked through reduplicative morphology.
Fig. 13.8: Plural of box, articulated along an arc
For example, the singular and plural of way are illustrated in Figure 13.9a; the latter has a horizontal translation between the two outward movements. When a bisyllabic singular form is pluralized, the resulting form does not increase in overall number of syllables but remains bisyllabic, consisting of a single syllable ⫺ reduced from the singular form ⫺ which is reduplicated, with the horizontal translation characteristic of non-body-anchored signs. This is shown in Figure 13.9b for poster; the singular is produced with two small outward movements at different heights relative to the signer, whereas the plural involves two downward movements, separated by a horizontal translation, between the positions used for each of the two movements in the singular.

Perry (2005) examined the morphological classes for which plurals overtly marked in this way are possible. She found some variation among ASL signers in terms of which signs have distinct plural forms, as well as the exact form(s) that the plural could take. She presented an Optimality-Theoretic account of some of the principles that govern how a reduplicated plural can be related to a mono- or bisyllabic singular form. What is perhaps surprising, however, is that use of the overtly plural form (even when a distinct plural form exists) is not obligatory for a noun that is semantically plural.
Fig. 13.9: Unmarked (singular) vs. plural forms of the signs (a) way (singular: one movement; plural: a sequence of two movements) and (b) poster (singular: two outward movements; plural: two downward movements)
Fig. 13.10: cop signed with reduplication (first example, from a sentence meaning ‘The cop pulled behind the car …’) and without reduplication (second example, from a sentence meaning ‘Another cop pulled the car over.’)
Whereas the plural form of poster (articulated with a reduplicated motion) is unambiguously plural in (35), (34) can be interpreted to refer to one or more posters.

(34) ix1 like poster
‘I like (a/the) poster(s).’
[ASL]
(35) ix1 like poster-pl
‘I like (the) posters.’

Moreover, consistent with the observation in section 3.3 that reduplication may be correlated with prosodic prominence and length, the reduplicated (overtly plural) form is more likely to be used in prosodically prominent positions (e.g., for constituent- or sentence-final nouns, or those that receive stress associated with pragmatic focus). These same conditions appear to correlate with the likelihood of use of reduplicated forms for singulars that can optionally occur as reduplicated (e.g., cop, boy) (Neidle 2009). Compare the examples in Figure 13.10, taken from a story by Ben Bahan. In the first, with prosodic prominence on cop, it is articulated with a reduplicated motion; in the second, where the focus is on other, it is not.

Although almost all seemingly singular forms are simply unmarked for number (and therefore compatible with either a singular or plural reading), there are a few cases of a real distinction between singular and plural: e.g., child vs. children, person vs. people. In such cases, the plural is irregular, in that it is not formed from the singular by addition of regular reduplicative plural morphology. The difference in the behavior of inherently singular nouns, as compared with nouns simply unmarked for plurality, will be demonstrated in section 4.3.
4.3. Lack of concord

Within a noun phrase, an overt expression of plurality does not normally occur on more than one element. As observed by Pfau and Steinbach (2006), there are also other sign languages, including DGS, in which plurality can be overtly expressed only once within a noun phrase, as in spoken Hungarian and Turkish. They note that not all sign languages have restrictions on NP-internal number agreement (Hausa Sign Language and Austrian Sign Language (ÖGS) do not). If there is some other semantic indicator of plurality ⫺ e.g., a numeral or quantifier such as many, few, etc. ⫺ then overt plural morphology on the main noun is superfluous. Similarly, if a plural NP contains both a definite determiner and a noun that has a distinct plural form, the plurality is overtly marked on one or the other but not both, as illustrated by the following noun phrases:

(36) [ many poster ]
‘many posters’
[ASL]
(37) [ three poster ]
‘three posters’

(38) ?* [ many poster-pl ]
‘many posters’

(39) ?* [ three poster-pl ]
‘three posters’

(40) [ ix3pl-arc poster ]
‘the/those posters’

(41) [ ix3i poster-pl ]
‘the/those posters’

(42) ?* [ ix3pl-arc poster-pl ]
‘the/those posters’

In many spoken languages, grammatical number (singular vs. plural) is a phi-feature that triggers obligatory agreement/concord within noun phrases. However, in ASL it appears that, although there is the possibility of overtly marking plurality, multiple indicators of plurality within a single noun phrase are not needed. Thus, when a noun like poster is unmarked for number, it is consistent with either a singular or plural interpretation, which can be determined contextually. In contrast, in (41), where there is an overt morphological expression of plurality, the head noun, and therefore the noun phrase as a whole, are marked as plural.

Overt plural marking on adjectives in ASL through reduplication is extremely rare. However, at least one adjective, different, can bear plural (reduplicative) inflection. Consistent with the above observation, overt marking for plurality normally occurs on only one element within a noun phrase. In an NP such as [ different language ] referring to ‘different languages’, one or the other of those elements can occur in the plural form (with reduplication most likely on the element that is prosodically prominent), but not both.
However, it is worth noting that, in ASL at least (although this appears to be different from DGS, based on Pfau and Steinbach 2006, 170), there is not an absolute prohibition against multiple expressions of plurality within an NP. An irregular plural form such as children is related to a form child that is intrinsically singular. Thus, a word like many or three can only be followed by the plural form children, not by the singular child, which would be semantically incompatible. This is true for other singular/plural pairs in which the plural does not contain regular plural morphology (e.g., people). Compare the following phrases with those presented above:

(43) * [ many child ]
‘many children’
[ASL]
(44) * [ three child ]
‘three children’

(45) [ many children ]
‘many children’

(46) [ three children ]
‘three children’

(47) * [ ix3pl-arc child ]
‘the/those children’

(48) [ ix3i children ]
‘the/those children’

Thus, some nouns in ASL have overt plurals, many (but not all) formed through regular plural inflection involving reduplication (e.g., way-pl, poster-pl). An even smaller number of ASL nouns have forms that are intrinsically singular (e.g., child[sg], person[sg]). Nouns unmarked for number (e.g., poster) are compatible with either singular or plural interpretations, subject to a strong preference to avoid redundant expression of plurality within NPs when it is possible to do so.
5. DP-internal word order

As has been observed going back to Greenberg’s (1963) ‘Universal 20’ ⫺ and as has subsequently been considered within more recent theoretical frameworks (Hawkins 1983; Cinque 2005) and also within the context of sign language research (Zhang 2007) ⫺ when demonstratives, numerals, and adjectives occur prenominally (as they do in their canonical order in ASL), they universally occur in this order: Demonstrative > Numeral > Adjective > Noun. ASL is no exception. Worth noting, however, are phenomena involving numeral incorporation, to be discussed below.
5.1. Expression of quantity

There is obligatory numeral incorporation of ix with numerals (which would otherwise have been expected to occur immediately following ix).
English phrases like ‘we three’ or ‘the two of them’ are expressed by single signs in ASL (as described, for example, by Baker and Cokely (1980, 370)). The supinated hand (i.e., palm up) with the numeral hand shape 2 shakes back and forth between two referents; the hand shapes of 3, 4, or 5 circle once or twice in a movement either inclusive or exclusive of the signer. The sign we-two, when it includes the addressee, is signed using a back-and-forth motion of the 2 hand shape. This kind of numeral incorporation has also been described for Croatian Sign Language (HZJ), BSL, and other sign languages (e.g., Alibašić Ciciliani/Wilbur 2006; see also chapters 6 and 11).

It is also not uncommon for specific nouns to undergo incorporation with numerals (which would otherwise have been expected to occur immediately before them); for information about numeral incorporation in Argentine Sign Language (LSA) and Catalan Sign Language (LSC), see Fuentes et al. (2010). In ASL, the numerals 1 through 9 (smaller numbers doing this more commonly than larger ones) can be melded into signs of time and money, for example: two-hours, three-days, four-weeks, five-dollars, six-months, seven-years-old, time-eight (8:00), nine-seconds. Sutton-Spence and Woll (1999) give examples of the same incorporation in BSL (£3, three-years-old), with five being the highest numeral that can be incorporated (and this form is rare compared to the lower numerals). For excellent examples and illustrations of the use of numerals and quantifiers in various constructions, see also Numbering in American Sign Language (DawnSignPress 1998).

ASL also has a variety of quantifiers, although these will not be discussed here. See Boster (1996) for discussion of possible variations in word order that have been claimed to occur with quantifiers and numerals.

ASL can also make use of classifier constructions to convey notions of quantity (see chapter 8). This can be done through classifiers that express a range of types of information about such things as quantity, form, and spatial distribution of objects. There are also cases where numerals incorporate with classifiers, giving rise to what have been called ‘specific-number classifiers’ (Baker/Cokely 1980, 301), which represent a specific number of people or animals through the use of the hand shapes corresponding to numerals.
5.2. Ordering of adjectives

ASL has both prenominal and postnominal adjectives within the noun phrase, and there are some significant differences between them. There may also be differences in usage among signers, with some generational differences having been reported. Padden (1988) reported a preference for postnominal adjectives; Gee and Kegl (1983) reported that older signers showed a preference for postnominal adjectives.

Adjectives in ASL that occur predicatively (i.e., that are in the clause, but not contained within a noun phrase) exhibit properties distinct from those that occur NP-internally. As observed by MacLaughlin (1997), only predicative adjectives can inflect for aspect and agreement in the same way that verbs can. As analyzed by MacLaughlin (1997, 186):

Prenominal adjectives are … attributive modifiers, occurring in the specifier position of a functional projection above NP (following Cinque 1994), while postnominal adjectives are predicative modifiers, right-adjoined at an intermediate position in the DP projection.
Prenominal (but not postnominal) adjectives in ASL are strictly ordered, and the order is comparable to that found in English and discussed by Cinque (1994) as attested in many languages. This is illustrated by MacLaughlin’s examples showing the contrast between the prenominal adjective sequences in (49) and (50) and the postnominal sequences in (51) and (52). When the adjectives occur prenominally, the NP is not well-formed if red precedes big, whereas postnominally, either word order is allowed (examples from MacLaughlin 1997, 193).

(49) [ big red ball ixadvi ]DP beautiful
‘The big red ball over there is beautiful.’
[ASL]
(50) * [ red big ball ixadvi ]DP beautiful

(51) [ ball red big ixadvi ]DP beautiful
‘The ball that is red and big over there is beautiful.’

(52) [ ball big red ixadvi ]DP beautiful
‘The ball that is big and red over there is beautiful.’

Certain adjectives can only occur prenominally in canonical word order; for example: basic, true/real, former. Other adjectives are interpreted differently when used prenominally vs. postnominally, such as old (examples from MacLaughlin 1997, 196).

(53) [ poss1 old friend ]
‘my old friend’

(54) [ poss1 friend old ]
‘my friend who is old’
5.3. Summary

Sign languages are subject to the same general constraints on word order as spoken languages. The relative canonical order of demonstratives, numerals, and adjectives that occur prenominally in ASL is consistent with what is found universally. However, it is also true, as previously noted, that ASL allows considerable flexibility with respect to surface word order. Deviations from the canonical word orders, attributable to displacements of constituents from their underlying positions, are frequently identifiable by prosodic cues. See Zhang (2007) for discussion of word order variation. Focusing on Taiwan Sign Language, Zhang investigates the ways in which variations in word order, both within a given language and across languages, can be derived. See also Bertone (2010) for discussion of noun phrase structure in LIS.
6. Conclusion

Sign languages are governed by the same fundamental syntactic principles as spoken languages. ASL includes the same basic inventory of linguistic elements. In particular,
we have argued for the existence of both definite and indefinite determiners occurring prenominally in the canonical word order within a DP. Sign languages also exhibit standard syntactic processes, such as agreement, although the specifics of how agreement works are profoundly affected by the nature of spatial representations of reference. In ASL and many other sign languages, referential features, along with person features, are involved in agreement/concord relations. These features are realized morphologically not only on determiners but also on pronouns, possessives, reflexives/intensifiers, and agreement affixes that attach to predicates (including verbs and adjectives), and they can also be realized non-manually through head tilt and eye gaze.

In contrast, number features are not among those features that exhibit concord within noun phrases. The base form of most nouns is unmarked for number. Certain nouns allow for plurality to be overtly marked morphologically, through a regular inflectional process that involves reduplication. However, multiple markings of plurality within a noun phrase are strongly dispreferred.

Acknowledgements: We are grateful to the many individuals who have participated in the American Sign Language Linguistic Research Project involving research on ASL syntax and in the collection and annotation of video data for this research. The research reported here has benefitted enormously from the contributions of Ben Bahan, Lana Cook, Robert G. Lee, Dawn MacLaughlin, Deborah Perry, Michael Schlang, and Norma Bowers Tourangeau. Further information is available from http://www.bu.edu/asllrp/. This work has been supported in part by grants from the National Science Foundation (IIS-0705749, IIS-0964385, and CNS-04279883).
7. Literature

Abney, Steven P.
1987 The English Noun Phrase in its Sentential Aspect. PhD Dissertation, MIT, Cambridge, MA.
Alibašić Ciciliani, Tamara/Wilbur, Ronnie B.
2006 Pronominal System in Croatian Sign Language. In: Sign Language & Linguistics 9, 95⫺132.
Bahan, Benjamin
1996 Non-manual Realization of Agreement in American Sign Language. PhD Dissertation, Boston University, Boston, MA.
Bahan, Benjamin/Kegl, Judy/MacLaughlin, Dawn/Neidle, Carol
1995 Convergent Evidence for the Structure of Determiner Phrases in American Sign Language. In: Gabriele, Leslie/Hardison, Debra/Westmoreland, Robert (eds.), FLSM VI: Proceedings of the Sixth Annual Meeting of the Formal Linguistics Society of Mid-America. Bloomington, Indiana: Indiana University Linguistics Club, 1⫺12.
Baker, Charlotte/Cokely, Dennis
1980 American Sign Language: A Teacher’s Resource Text on Grammar and Culture. Silver Spring, MD: T. J. Publishers.
Baker, Charlotte/Padden, Carol A.
1978 Focusing on the Nonmanual Components of American Sign Language. In: Siple, Patricia (ed.), Understanding Language through Sign Language Research. New York: Academic Press, 27⫺57.
Baker-Shenk, Charlotte
1983 A Micro-analysis of the Nonmanual Components of Questions in American Sign Language. PhD Dissertation, University of California, Berkeley, CA.
Bertone, Carmela
2010 The Syntax of Noun Modification in Italian Sign Language (LIS). In: Working Papers in Linguistics 2009. Venezia: Dipartimento di Scienze del Linguaggio, Università Ca’ Foscari, 7⫺28.
Bos, Heleen
1989 Person and Location Marking in Sign Language of the Netherlands: Some Implications of a Spatially Expressed Syntactic System. In: Prillwitz, Siegmund/Vollhaber, Tomas (eds.), Current Trends in European Sign Language Research: Proceedings of the 3rd European Congress on Sign Language Research. Hamburg: Signum, 231⫺246.
Boster, Carole Tenny
1996 On the Quantifier-Noun Phrase Split in American Sign Language and the Structure of Quantified Noun Phrases. In: Edmondson, William H./Wilbur, Ronnie B. (eds.), International Review of Sign Linguistics. Mahwah, NJ: Lawrence Erlbaum, 159⫺208.
Cardinaletti, Anna
1994 On the Internal Structure of Pronominal DPs. In: The Linguistic Review 11, 195⫺219.
Cecchetto, Carlo/Geraci, Carlo/Zucchi, Sandro
2006 Strategies of Relativization in Italian Sign Language. In: Natural Language and Linguistic Theory 24, 945⫺975.
Cinque, Guglielmo
1994 On the Evidence for Partial N-Movement in the Romance DP. In: Cinque, Guglielmo/Koster, Jan/Pollock, Jean-Yves/Rizzi, Luigi/Zanuttini, Raffaella (eds.), Paths Towards Universal Grammar: Studies in Honor of Richard S. Kayne. Georgetown University Press, 85⫺110.
Cinque, Guglielmo
2005 Deriving Greenberg’s Universal 20 and Its Exceptions. In: Linguistic Inquiry 36, 315⫺332.
Cooper, William E./Paccia-Cooper, Jeanne
1980 Syntax and Speech. Cambridge, MA: Harvard University Press.
Coulter, Geoffrey R.
1978 Raised Eyebrows and Wrinkled Noses: The Grammatical Function of Facial Expression in Relative Clauses and Related Constructions. In: Caccamise, Frank/Hicks, Doin (eds.), American Sign Language in a Bilingual, Bicultural Context: Proceedings of the Second National Symposium on Sign Language Research and Teaching. Coronado, CA: National Association of the Deaf, 65⫺74.
Coulter, Geoffrey R.
1990 Emphatic Stress in ASL. In: Fischer, Susan D./Siple, Patricia (eds.), Theoretical Issues in Sign Language Research, Volume 1: Linguistics. Chicago: University of Chicago Press, 109⫺125.
Coulter, Geoffrey R.
1993 Phrase-level Prosody in ASL: Final Lengthening and Phrasal Contours. In: Coulter, Geoffrey R. (ed.), Phonetics and Phonology: Current Issues in ASL Phonology. New York: Academic Press, 263⫺272.
DawnSignPress (ed.)
1998 Numbering in American Sign Language: Number Signs for Everyone. San Diego, CA: DawnSignPress.
De Vriendt, Sera/Rasquinet, Max
1989 The Expression of Genericity in Sign Language. In: Prillwitz, Siegmund/Vollhaber, Tomas (eds.), Current Trends in European Sign Language Research: Proceedings of the 3rd European Congress on Sign Language Research. Hamburg: Signum, 249⫺255.
Déchaine, Rose-Marie/Wiltschko, Martina
2002 On pro-nouns and other “Pronouns”. In: Coene, Martine/D’Hulst, Yves (eds.), From NP to DP, Volume 1: The Syntax and Semantics of Noun Phrases. Amsterdam: Benjamins, 71⫺89.
Engberg-Pedersen, Elisabeth
2003 From Pointing to Reference and Predication: Pointing Signs, Eyegaze, and Head and Body Orientation in Danish Sign Language. In: Kita, Sotaro (ed.), Pointing: Where Language, Culture, and Cognition Meet. Hillsdale, NJ: Lawrence Erlbaum, 269⫺292.
Fretheim, Thorstein
1995 Why Norwegian Right-dislocated Phrases are not Afterthoughts. In: Nordic Journal of Linguistics 18, 31⫺54.
Frishberg, Nancy
1978 The Case of the Missing Length. In: Communication and Cognition 11, 57⫺68.
Fuentes, Mariana/Massone, María Ignacia/Fernández-Viader, María del Pilar/Makotrinsky, Alejandro/Pulgarín, Francisca
2010 Numeral-incorporating Roots in Numeral Systems: A Comparative Analysis of Two Sign Languages. In: Sign Language Studies 11, 55⫺75.
Gee, James Paul/Kegl, Judy
1983 Performance Structures, Discourse Structures, and ASL. Manuscript, Hampshire College and Northeastern University.
Gelderen, Elly van
2007 The Definiteness Cycle in Germanic. In: Journal of Germanic Linguistics 19, 275⫺308.
Giusti, Giuliana
2002 The Functional Structure of Noun Phrases: A Bare Phrase Structure Approach. In: Cinque, Guglielmo (ed.), Functional Structure in the DP and IP: The Cartography of Syntactic Structures. Oxford: Oxford University Press, 54⫺90.
Greenberg, Joseph
1963 Some Universals of Grammar with Particular Reference to the Order of Meaningful Elements. In: Greenberg, Joseph (ed.), Universals of Language. Cambridge, MA: MIT Press, 73⫺113.
Grosjean, François
1979 A Study of Timing in a Manual and a Spoken Language: American Sign Language and English. In: Journal of Psycholinguistic Research 8, 379⫺405.
Gundel, Jeanette K./Fretheim, Thorstein
2004 Topic and Focus. In: Horn, Laurence R./Ward, Gregory L. (eds.), Handbook of Pragmatic Theory. Oxford: Blackwell, 174⫺196.
Hatzopoulou, Marianna
2008 Acquisition of Reference to Self and Others in Greek Sign Language: From Pointing Gesture to Pronominal Pointing Signs. PhD Dissertation, Stockholm University.
Hawkins, John
1983 Word Order Universals. New York: Academic Press.
Heim, Irene
2008 Features on Bound Pronouns. In: Harbour, Daniel/Adger, David/Béjar, Susana (eds.), Phi Theory: Phi-Features across Modules and Interfaces. Oxford: Oxford University Press, 35⫺56.
Hendriks, Bernadet
2008 Jordanian Sign Language: Aspects of Grammar from a Cross-linguistic Perspective. PhD Dissertation, University of Amsterdam. Utrecht: LOT.
Johnston, Trevor/Schembri, Adam
2007 Australian Sign Language: An Introduction to Sign Language Linguistics. Cambridge: Cambridge University Press.
Jones, N./Mohr, K.
1975 A Working Paper on Plurals in ASL. Manuscript, University of California, Berkeley.
Kegl, Judy
1976 Pronominalization in American Sign Language. Manuscript, MIT. [Reissued 2003, Sign Language & Linguistics 6, 245⫺265].
König, Ekkehard/Siemund, Peter
1999 Intensifiers and Reflexives: A Typological Perspective. In: Frajzyngier, Zygmunt/Curl, Traci S. (eds.), Reflexives: Forms and Functions. Amsterdam: Benjamins, 41⫺74.
Lee, Robert G./Neidle, Carol/MacLaughlin, Dawn/Bahan, Benjamin/Kegl, Judy
1997 Role Shift in ASL: A Syntactic Look at Direct Speech. In: Neidle, Carol/MacLaughlin, Dawn/Lee, Robert G. (eds.), Syntactic Structure and Discourse Function: An Examination of Two Constructions in ASL, Report Number 4. Boston, MA: American Sign Language Linguistic Research Project, Boston University, 24⫺45.
Liddell, Scott K.
1977 An Investigation into the Syntax of American Sign Language. PhD Dissertation, University of California, San Diego.
Liddell, Scott K.
1978 Nonmanual Signals and Relative Clauses in American Sign Language. In: Siple, Patricia (ed.), Understanding Language through Sign Language Research. New York: Academic Press, 59⫺100.
Liddell, Scott K.
1980 American Sign Language Syntax. The Hague: Mouton.
Liddell, Scott K.
2003 Grammar, Gesture, and Meaning in American Sign Language. Cambridge: Cambridge University Press.
Lillo-Martin, Diane
1995 The Point of View Predicate in American Sign Language. In: Emmorey, Karen/Reilly, Judy S. (eds.), Language, Gesture, and Space. Hillsdale, NJ: Lawrence Erlbaum, 155⫺170.
MacLaughlin, Dawn
1997 The Structure of Determiner Phrases: Evidence from American Sign Language. PhD Dissertation, Boston University, Boston, MA.
MacLaughlin, Dawn/Neidle, Carol/Bahan, Benjamin/Lee, Robert G.
2000 Morphological Inflections and Syntactic Representations of Person and Number in ASL. In: Recherches linguistiques de Vincennes 29, 73⫺100.
Neidle, Carol
2003 Language Across Modalities: ASL Focus and Question Constructions. In: Linguistic Variation Yearbook 2, 71⫺98.
Neidle, Carol
2009 Now We See It, Now We Don’t: Agreement Puzzles in ASL. In: Uyechi, Linda/Wee, Lian Hee (eds.), Reality Exploration and Discovery: Pattern Interaction in Language & Life. Stanford, CA: CSLI Publications.
Neidle, Carol/Kegl, Judy/MacLaughlin, Dawn/Bahan, Benjamin/Lee, Robert G.
2000 The Syntax of American Sign Language: Functional Categories and Hierarchical Structure. Cambridge, MA: MIT Press.
Neidle, Carol/Lee, Robert G.
2006 Syntactic Agreement across Language Modalities. In: Costa, João/Figueiredo Silva, Maria Cristina (eds.), Studies on Agreement. Amsterdam: Benjamins, 203⫺222.
Obando Vega, Ivonne Elena
2000 Lip Pointing in Idioma de Señas de Nicaragua (Nicaraguan Sign Language). Paper presented at the 7th International Conference on Theoretical Issues in Sign Language Research, July 23rd⫺27th, Amsterdam.
Padden, Carol A.
1988 Interaction of Morphology and Syntax in American Sign Language. New York: Garland Publishing.
Perry, Deborah
2005 The Use of Reduplication in ASL Plurals. MA Thesis, Boston University, Boston, MA.
Pfau, Roland/Steinbach, Markus
2005 Plural Formation in German Sign Language: Constraints and Strategies. In: Leuninger, Helen/Happ, Daniela (eds.), Gebärdensprache. (Linguistische Berichte Sonderheft 13.) Hamburg: Buske, 111⫺144.
Pfau, Roland/Steinbach, Markus
2006 Pluralization in Sign and in Speech: A Cross-modal Typological Study. In: Linguistic Typology 10, 135⫺182.
Postal, Paul
1966 On So-called “Pronouns” in English. In: Dinneen, Francis P. (ed.), Report of the Seventeenth Annual Round Table Meeting on Linguistics and Language Studies. Washington, D.C.: Georgetown University Press, 177⫺206.
Rinfret, Julie
2010 The Spatial Association of Nouns in Langue des Signes Québécoise: Form, Function and Meaning. In: Sign Language & Linguistics 13(1), 92⫺97.
Sandler, Wendy/Lillo-Martin, Diane
2006 Sign Language and Linguistic Universals. Cambridge: Cambridge University Press.
Selkirk, Elisabeth O.
1984 Phonology and Syntax: The Relation between Sound and Structure. Cambridge, MA: MIT Press.
Sutton-Spence, Rachel/Woll, Bencie
1999 The Linguistics of British Sign Language. Cambridge: Cambridge University Press.
Tang, Gladys/Sze, Felix Y. B.
2002 Nominal Expressions in Hong Kong Sign Language: Does Modality Make a Difference? In: Meier, Richard P./Cormier, Kearsy/Quinto-Pozos, David (eds.), Modality and Structure in Signed and Spoken Languages. Cambridge: Cambridge University Press, 296⫺319.
Thompson, Robin/Emmorey, Karen/Kluender, Robert
2006 The Relationship between Eye Gaze and Verb Agreement in American Sign Language: An Eye-Tracking Study. In: Natural Language and Linguistic Theory 24, 571⫺604.
Uriagereka, Juan
1992 Aspects of the Syntax of Clitic Placement in Western Romance. In: Linguistic Inquiry 26, 79⫺123.
Wilbur, Ronnie B.
1979 American Sign Language and Sign Systems: Research and Application. Baltimore, MD: University Park Press.
Wilbur, Ronnie B.
1987 American Sign Language: Linguistic and Applied Dimensions. Boston, MA: College-Hill Press.
Wilbur, Ronnie B.
1994 Foregrounding Structures in American Sign Language. In: Journal of Pragmatics 22, 647⫺672.
Wilbur, Ronnie B.
1999 Stress in ASL: Empirical Evidence and Linguistic Issues. In: Language and Speech 42, 229⫺250.
Zeshan, Ulrike/Perniss, Pamela (eds.)
2008 Possessive and Existential Constructions in Sign Languages. Nijmegen: Ishara Press.
Zhang, Niina Ning
2007 Universal 20 and Taiwan Sign Language. In: Sign Language & Linguistics 10, 55⫺81.
Zimmer, June/Patschke, Cynthia
1990 A Class of Determiners in ASL. In: Lucas, Ceil (ed.), Sign Language Research: Theoretical Issues. Washington, DC: Gallaudet University Press, 201⫺210.
Carol Neidle and Joan Nash, Boston, Massachusetts (USA)
14. Sentence types

1. Introduction
2. Polar (yes/no) questions
3. Content (wh) questions
4. Other constructions with wh-phrases
5. Conclusion
6. Literature
Abstract

Although the major sentence types are declaratives, interrogatives, imperatives, and exclamatives, this chapter focuses on declaratives and interrogatives, since imperatives and exclamatives have not yet been systematically studied in sign languages. Polar (yes/no) questions in all known sign languages are invariably marked by a special non-manual marker (NMM), although in some sign languages sentence-final question particles can mark them as well. Content (wh) questions are an area of possible macrotypological variation between spoken and sign languages. In the overwhelming majority of spoken languages, wh-phrases either occur at the left edge of the sentence or remain in situ. However, a possible occurrence of wh-phrases at the right periphery is reported in most of the sign languages for which a description of content questions is available, although, for many of them, occurrence of wh-phrases at the left periphery or in situ is also possible. In some analyses, wh-phrases in sign languages access positions not available to wh-phrases in spoken languages, while other analyses deny or minimize this macrotypological difference. An area in which these analyses make different predictions is wh-NMM. Finally, some constructions different from content questions in which wh-signs nonetheless occur are also reported in this chapter.
1. Introduction

‘Sentence types’ is a traditional linguistic category that refers to the pairing of grammatical form and conversational use (cf. Sadock/Zwicky 1985).
Well-established sentence types in spoken language are declaratives, interrogatives, and imperatives. Another less established sentence type is exclamatives (cf. Zanuttini/Portner 2003). Since sign languages can be used to make an assertion, to ask a question, or to give an order, it is no surprise that they develop grammaticalized forms associated with these conversational uses. However, while the sign language literature contains a considerable body of work on declaratives and interrogatives, research on other sentence types is extremely limited. In fact, no study has been exclusively dedicated to imperatives or exclamatives in any sign language. Sparse and unsystematic information is scattered in works that are devoted to other topics.

Baker and Cokely (1980) mention that commands in American Sign Language (ASL) are usually indicated by stress (emphasis) on the verb and direct eye gaze at the addressee. This stress usually involves making the sign faster and sharper. De Quadros (2006) reports work (in Brazilian Portuguese) by Ferreira-Brito (1995) on questions that are marked by a special non-manual marking (NMM) and function as polite commands in Brazilian Sign Language (LSB). Zeshan (2003) mentions that Indo-Pakistani Sign Language (IPSL) uses positive and negative particles to express imperatives. Spolaore (2006), a work in Italian, identifies a sign (glossed as ‘hand(s) forward’) that tends to appear in sentence-final position in imperative sentences in Italian Sign Language (LIS). Johnston and Schembri (2007) claim that in Australian Sign Language (Auslan) imperatives the actor noun phrase is often omitted and signs are produced with a special stress, direct eye gaze at the addressee, and frowning. While this information indicates that (some) sign languages have developed grammaticalized forms for imperatives, the limited amount of research does not justify a review of the literature. For this reason, this chapter will be devoted to interrogatives.

The properties of declarative sentences in a given language (the unmarked word order, the presence of functional signs, etc.) will be discussed only when this is necessary to show how interrogatives are distinguished from declaratives, for example by a change in the order of signs or in the distribution of NMM. Declarative sentences are also discussed in the chapters devoted to word order (chapter 12) and complex sentences (chapter 16).

All three approaches to the study of sign languages that the handbook explores, namely the comparability of sign and spoken languages, the influence of modality on language, and typological variation between sign languages, strongly interact in this chapter. In particular, in the discussion of content questions, conclusions emerging from the typological literature will be reported along with more theoretically oriented analyses concerning specific sign languages.
2. Polar (yes/no) questions

Sign languages tend, to a notable degree, to employ the same strategy to mark polar (yes/no) questions. In fact, polar questions in all known sign languages are invariably marked by a special NMM (for a detailed discussion of NMM, see chapter 4, Prosody). According to Zeshan (2004), the NMM associated with yes/no questions typically involves a combination of several of the following features:
⫺ eyebrow raise
⫺ eyes wide open
⫺ eye contact with the addressee
⫺ head forward position
⫺ forward body posture
Tab. 14.1: Research on polar questions in sign languages
American Sign Language (ASL), cf. Wilbur and Patschke (1999)
Australian Sign Language (Auslan), cf. Johnston and Schembri (2007)
Austrian Sign Language (ÖGS), cf. Šarac et al. (2007)
Brazilian Sign Language (LSB), cf. de Quadros (2006)
British Sign Language (BSL), cf. Sutton-Spence and Woll (1999)
Catalan Sign Language (LSC), cf. Quer et al. (2005)
Croatian Sign Language (HZJ), cf. Šarac and Wilbur (2006)
Flemish Sign Language (VGT), cf. Van Herreweghe and Vermeerbergen (2006)
Finnish Sign Language (FinSL), cf. Savolainen (2006)
Hong Kong Sign Language (HKSL), cf. Tang (2006)
Israeli Sign Language (Israeli SL), cf. Meir (2004)
Indo-Pakistani Sign Language (IPSL), cf. Zeshan (2004)
Japanese Sign Language (NS), cf. Morgan (2006)
Quebec Sign Language (LSQ), cf. Dubuisson et al. (1991)
New Zealand Sign Language (NZSL), cf. McKee (2006)
Sign Language of the Netherlands (NGT), cf. Coerts (1992)
Spanish Sign Language (LSE), cf. Herrero (2009)
Turkish Sign Language (TİD), cf. Zeshan (2006)
In many cases, only NMM can differentiate polar questions from declarative sentences. For example, Morgan (2006) reports that in NS a declarative sentence and the corresponding polar question may be distinguished only by the occurrence of a special NMM, namely eyebrow raise, slight head nod, and chin tuck on the last word. However, he notes that the index sign may be moved to the sentence-final position in polar questions, as in (2):

(1) index2 book buy
‘You bought a book.’
pol-q
(2) book buy index2
‘Did you buy the book?’
[NS]
The importance of the eyebrow raise feature should be stressed, since it also discriminates polar questions from content (wh) questions in the many sign languages in which, as we will see in section 3, content questions are marked by eyebrow lowering. Although in other grammatical constructions (like negative sentences and content questions) the scope of non-manual marking can vary significantly both crosslinguistically and language-internally, non-manual marking in polar questions shows relatively minor variation. In fact, it typically extends over the whole clause, except for signs that are marked by a different non-manual marking (for example, topicalized constituents).
In many sign languages, eyebrow raise marking is shared by polar questions and other grammatical constructions. ASL is a well-documented case. Coulter (1979) observes that eyebrow raise marks any material in left peripheral position. This includes, as further discussed by Wilbur and Patschke (1999), diverse constructions like topics, left dislocated phrases, relative clauses, conditionals, and focused phrases (MacFarlane (1998) contains crosslinguistic data confirming the occurrence of eyebrow raise in a subset of these constructions). After excluding alternative analyses, Wilbur and Patschke conclude that the commonality among all the ASL structures that show eyebrow raise is that this NMM shows up in A-bar positions which are associated with operator features that are [⫺wh]. So, the three distinctive brow positions, raised, furrowed, and neutral, would each be associated with a different operator situation: [⫺wh], [+wh], and none, respectively.

The fact that eyebrow raise is shared by polar questions and the protasis of conditionals introduces a possible complication. In sign languages in which a functional sign corresponding to ‘if’ is not required, distinguishing a question-answer pair introduced by a polar question from a conditional may be difficult. This is so because a question-answer pair may express the same information as a conditional (cf. the similar meaning of (3a) and (3b)):
(3) a. Does it rain? I go to the cinema.
b. If it rains, I go to the cinema.
This raises the possibility that some sign languages might lack conditionals altogether, since they might be functionally replaced by a question-answer pair introduced by a polar question. However, this is unlikely. For one thing, eyebrow raise might be associated with a cluster of NMMs rather than being a single independent feature. Therefore, closer examination might reveal that the NMMs associated with hypotheticals and with question-answer pairs are different. Furthermore, Barattieri (2006) identified some devices that can disentangle question-answer pairs such as (3a) and genuine conditionals in LIS, a language in which the sign corresponding to if can be easily omitted and eyebrow raise marks both polar questions and (alleged) protases of conditionals. For example, in LIS (as in English) counterfactual conditionals like ‘Had Germany won, Europe would now be controlled by Nazis’ cannot felicitously be replaced by the corresponding question-answer pair ‘Did Germany win? Now Europe is controlled by Nazis’. By using this and similar devices, a polar question and the protasis of a conditional can be distinguished even in languages in which they are marked by the same (or by a similar) non-manual marking.

If NMM is the sign language counterpart of intonation (cf. Sandler 1989, among many others, for this proposal), sign and spoken languages do not seem to pattern very differently as far as polar questions are concerned, since intonation (for example, rising intonation at the end of questions) can mark polar questions in spoken languages as well (colloquial English is an example, and Italian is a more extreme one, since a rising intonation is the only feature which can discriminate an affirmative sentence from the corresponding polar question). However, a difference between spoken and sign languages might be at stake here as well. According to the most comprehensive typological source available at the time of writing (Dryer 2009a), in spoken languages the use of strategies distinct from intonation to mark polar questions is extremely common.
These strategies include a special interrogative morphology on the verb, the use of a question particle, and a change in word order. Sign languages might use strategies other than intonation to a significantly lesser extent than spoken languages do. The only notable exception is the use of sentence-final question particles to mark polar questions in languages like ASL, HKSL, and HZJ. However, even in these languages, question particles complement NMMs as a way to mark questions, rather than fully replacing them. More specifically, in ASL, eyebrow raise is obligatory on the question particle and may optionally spread over the entire clause (Neidle et al. 2000, 122⫺124). In HKSL, eyebrow raise occurs only on the question particle and cannot spread (Tang 2006, 206). In HZJ, the NMM associated with polar questions spreads over the entire sentence (Šarac/Wilbur 2006, 154⫺156). This notwithstanding, it cannot be excluded that the difference between spoken and sign languages is not a real one but is due to our current limited knowledge of the grammar of the latter. It is possible that there are sign languages which do not use intonation to mark polar questions, but, if so, these have been poorly studied. Similarly, a closer examination of the word order and morphology of sign languages that are thought to mark polar questions only with NMM might reveal that they use other strategies as well. Only future research can determine this.
3. Content (wh) questions

Content (wh) questions have been investigated in close detail in various sign languages, and some controversy has arisen both about the data and about the possible analyses. A reason why content questions attract much attention is that they might be an area of macrotypological variation between spoken and sign languages. In the overwhelming majority of spoken languages, wh-phrases either occur at the left edge of the sentence or remain in situ. Cases of spoken languages in which wh-phrases systematically occur at the right edge of the sentence are virtually unattested. In WALS Online (cf. Dryer 2009b), only one language (Tennet) is indicated as a potential exception. Considering that the WALS Online database covers more than 1200 spoken languages, this generalization is very robust. However, a possible occurrence of wh-phrases at the right periphery is reported in most of the sign languages for which a description of content questions is available, although, for many of them, occurrence of wh-phrases at the left periphery or in situ is also possible. Based on this pattern, various authors have proposed that wh-phrases in sign languages may access positions not available to wh-phrases in spoken languages.

Since content questions in ASL were the first to be analyzed in detail and the subsequent investigation of wh-interrogatives has been influenced by this debate, two competing analyses of ASL questions will be described first. Later in this chapter, other sign languages will be considered. The leftward movement analysis, mostly due to work by Karen Petronio and Diane Lillo-Martin, is presented in section 3.1. Section 3.2 summarizes the rightward movement analysis, which is systematically defended in Neidle et al. (2000) (from now on, NKMBL). In section 3.3 content questions in LIS are discussed, while section 3.4 summarizes the remnant movement analysis, which is a device that can explain the occurrence of wh-signs in the right periphery
without assuming rightward movement. Section 3.5 is devoted to the analysis of duplication of the wh-phrase. Section 3.6 concludes the discussion of content questions by drawing a provisional conclusion on the issue of the (alleged) macrotypological variation between spoken and sign languages concerning the position of wh-items.
3.1. The leftward movement analysis for ASL content questions

One reason that makes content questions in ASL difficult to interpret is that wh-signs may appear in many different positions, namely in situ, sentence-finally, or doubled in the left and in the right periphery. In (4) this is illustrated with a wh-object, but there is consensus in the literature (Petronio/Lillo-Martin 1997; NKMBL) that the same happens with wh-signs playing other grammatical roles. (4a) indicates the unmarked SVO word order of ASL. It is important to stress that adverbs like yesterday are clause-final in ASL. This allows us to check if the direct object is in situ (namely, it precedes yesterday) or has moved to the right periphery of the sentence (namely, it follows yesterday). (4b) illustrates a case of doubling of the wh-sign, which surfaces both in the left and in the right periphery. In (4c) the wh-phrase is in situ and, finally, in (4d) the wh-phrase surfaces only in the right periphery. Content questions are accompanied by a specific non-manual marking (wh-NMM), namely a cluster of expressions of the face and upper body, consisting most notably of furrowed eyebrows:
(4) a. john buy book yesterday
‘Yesterday John bought a book.’
[ASL]
wh
b. what john buy yesterday what
‘What did John buy yesterday?’

wh
c. john buy what yesterday
‘What did John buy yesterday?’

wh
d. john buy yesterday what
‘What did John buy yesterday?’

Since rightward movement of wh-elements is crosslinguistically very rare, if it exists at all, Petronio and Lillo-Martin assume that wh-movement is universally leftward and explain the pattern in (4) as follows. In (4b) a wh-sign is found in the left periphery, as expected if wh-movement is leftward. As for the fact that the wh-sign is doubled at the right edge, they assume that the wh-double is a clause-final complementizer which occupies the COMP position, much like the interrogative complementizers that are found in many SOV languages. Although ASL is SVO, it has been proposed that it was SOV at an earlier stage (cf. Fischer 1975), so the placement of the interrogative complementizer at the right edge might be a residue of this earlier stage. Furthermore, Petronio and Lillo-Martin observe that the doubling in (4b) is an instance of a more general phenomenon which occurs with non-wh-signs as well. For example, modals, lexical verbs, and quantifiers can be doubled in the right periphery for focus or emphasis (the phenomenon of doubling will be discussed in section 3.5). Since they take wh-doubling in the right periphery to
be a case of focalization on a par with other cases of doubling, Petronio and Lillo-Martin claim that wh-NMM expresses the combination of wh and Focus features that are hosted in the COMP node of all direct questions. Spreading occurs over the c-command domain of COMP (namely, the entire sentence).

Cases of in situ wh-signs like (4c) are not surprising, since it is not uncommon to find languages displaying both the leftward movement option and the in situ option. The order in (4d) is more difficult to explain if the right peripheral wh-sign is a complementizer, since this question would lack an argument wh-phrase altogether. However, Petronio and Lillo-Martin (following Lillo-Martin/Fischer 1992) observe that ASL allows null wh-words, as in examples like (5):

wh
(5) time
‘What time is it?’
[ASL]
Therefore, they explain the pattern in (4d) by arguing that this sentence contains a null wh-phrase in the object position. A natural question concerns sentences like (6), in which the wh-phrase is found where it is expected if wh-movement is leftward and no doubling is observed (the symbol ‘#’ indicates that the grammaticality status of this sentence is controversial):

wh
(6) #who john hate
‘Who does John hate?’
[ASL]
Unfortunately, there is no consensus on the grammatical status of sentences of this type. For example, Petronio and Lillo-Martin say that they elicited varying judgments from their informants, while NKMBL claim that their informants rejected this type of sentence altogether. Note that, if wh-movement is leftward, at least under the simplest scenario, a question like (6) should be plainly grammatical, much like its translation in English. So, its dubious status is a potential challenge for Petronio and Lillo-Martin’s account. They deal with this issue by arguing that, for stylistic reasons, some signers prefer the position of the head-final complementizer to be filled with overt material. So, (6) is disliked or rejected in favor of the much more common structure with doubling exemplified in (4b) above. They support this conjecture by observing that judgments become much sharper when the question with an initial wh-sign and no doubling is embedded under a predicate like wonder, as in (7). They interpret (7) as an indirect question with the order that is expected under the assumption that wh-movement is leftward:

ponder
(7) i wonder what john buy
‘I wonder what John bought.’
[ASL]
As indicated, sentences like (7) are reported by Petronio and Lillo-Martin not to occur with the familiar wh-NMM, but with an NMM consisting of a puzzled, pondering facial expression. Partly for this reason, Neidle et al. (1998) deny that embedded structures marked by this type of NMM are genuine indirect questions.
Petronio and Lillo-Martin observe that another advantage of their analysis is that it can explain why a full phrase cannot occupy the right peripheral position. For example, structures like (8) are reported by them to be ungrammatical ((8) is marked here with the symbol ‘#’ because these data have been contested as well, as we will see shortly). The ungrammaticality of (8) straightforwardly follows if the clause-final wh-sign is indeed a complementizer (phrases cannot sit in the position of heads, under any standard version of phrase structure theory, like X-bar theory):

wh
(8) #which computer john buy which computer
[ASL]
Summarizing, Petronio and Lillo-Martin, confronted with the complex pattern of ASL wh-questions, give an account that aims at explaining the data by minimizing the difference with spoken languages, in which rightward wh-movement is virtually unattested.
3.2. The rightward movement analysis for ASL content questions

Proponents of the rightward movement analysis take the rightward placement of wh-signs at face value and claim that wh-movement is rightward in ASL. This analysis has been systematically defended by NKMBL. Of course, the rightward movement analysis straightforwardly explains the grammaticality of examples like (4d), in which the wh-item is clause-final. NKMBL also report examples in which the wh-category in the right periphery is a phrase, not a single wh-sign, although these data have been contested by Petronio and Lillo-Martin. For example, informants of NKMBL find a sentence like (8) above fully acceptable. Examples in which the wh-phrase is in situ (cf. (4c)) are also not surprising, since, as already mentioned, many languages with overt wh-movement admit the in situ strategy as well.

The hardest cases for the rightward movement analysis are those in which the wh-category is in the left periphery. Setting aside sentences like (6), which have a dubious status, the only uncontroversial case of left placement of the wh-phrase is in cases of doubling like (4b). NKMBL deal with these cases by assuming that the wh-phrase in the left periphery is a wh-topic. They support this conjecture by observing that wh-topics display the same distributional properties as base-generated topics and that their NMM results from the interaction of wh-NMM and of the NMM that marks topics. This proposal faces the potential challenge that not many languages allow wh-phrases in topic positions. However, NKMBL list some languages that do, so ASL would not be a real exception.

One piece of evidence advocated by NKMBL in favor of the hypothesis that the category that sits at the right edge is a wh-phrase (and not a wh complementizer) is the fact that their informants accept questions like (9), in which a complex phrase is rightward moved. As usual, the symbol ‘#’ indicates a disagreement, since Petronio and Lillo-Martin would mark questions with a right peripheral wh-phrase as ungrammatical:

wh
(9)
#john buy yesterday which computer
[ASL]
NKMBL claim that spreading of wh-NMM over the entire sentence is optional when the wh-phrase occupies the clause-final position (Spec,CP in their account), while it is mandatory when the wh-phrase is in situ. They analyze this distribution as an instance of a more general pattern, which is found with other types of grammatical NMMs (such as the NMMs associated with negation, yes-no questions, and syntactic agreement). NMMs are linked to syntactic features postulated to occur in the heads of functional projections. In all these cases, the domain of NMM is the c-command domain of the node with which the NMM is associated. Spreading of the relevant NMM is optional, unless it is required for the purpose of providing manual material with which the NMM can be articulated. Since the node with which the wh-NMM is associated is the head of the CP position, the domain of wh-NMM is the c-command domain of COMP, which corresponds to the entire sentence.

The distribution of NMM has been used as an argument both in favor of and against the rightward movement analysis. NKMBL claim that the rightward movement analysis is supported by the fact that the intensity of the wh-NMM increases as the question is signed. This is expected if the source of the wh feature occurs at the right edge, as the intensity of wh-NMM is greatest nearest the source of the wh feature and diminishes as the distance from that node increases. On the other hand, Petronio and Lillo-Martin observe that the generalization that spreading of wh-NMM is optional when the wh-phrase has moved to its dedicated position at the right edge makes a wrong prediction for sentences like (10), which should be acceptable but are not (the structure is grammatical if the wh-NMM occurs over the entire sentence, as in (4b)):

(10)   wh                      wh
      *what john buy yesterday what                          [ASL]
      'What did John buy yesterday?'
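The c-command-based account of spreading sketched above can be made concrete with a small computational sketch. The following Python fragment is a toy illustration only (the tree encoding and the node labels are assumptions made for this sketch, not NKMBL's actual representations): it computes the c-command domain of a clause-final COMP in a right-peripheral CP and shows that this domain is the entire sentence, which is exactly the maximal spreading domain of wh-NMM on their account.

class Node:
    def __init__(self, label, children=()):
        self.label = label
        self.children = list(children)
        self.parent = None
        for child in self.children:
            child.parent = self

def leaves(node):
    # Terminal material dominated by `node`, left to right.
    if not node.children:
        return [node.label]
    return [leaf for child in node.children for leaf in leaves(child)]

def c_command_domain(node):
    # Everything dominated by the sisters of `node` (standard c-command).
    if node.parent is None:
        return []
    return [leaf for sister in node.parent.children
            if sister is not node for leaf in leaves(sister)]

# A right-peripheral CP: [CP [IP john [VP buy what]] C[+wh]]
ip = Node("IP", [Node("john"), Node("VP", [Node("buy"), Node("what")])])
comp = Node("C[+wh]")
cp = Node("CP", [ip, comp])

print(c_command_domain(comp))  # ['john', 'buy', 'what'] -- the whole clause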
NKMBL account for the ungrammaticality of (10) by capitalizing on the notion of perseveration, namely the fact that, if the same articulatory configuration is to be used multiple times in a single sentence, it tends to remain in place between those articulations (if this is possible). Perseveration, according to NKMBL, is a general phenomenon which is found in other domains as well (for example, in classifier constructions, as discussed by Kegl (1985)). The problem with (10) would be a lack of perseveration, so the sentence would contain a phonological violation.

A revised version of the rightward movement analysis has been proposed by Neidle (2002), who claims that the wh-phrase passes through a focus position in the left periphery in its movement towards the Spec,CP position in the right periphery. She shows that this focus position houses not only focused DPs, but also 'if', 'when', and relative clauses. Neidle supports her analysis by showing that wh-phrases (including non-focused wh-phrases) remain in situ when the focus position in the left periphery, being already filled, cannot be used as an intermediate step. This pattern can be straightforwardly reduced to a case of Relativized Minimality, in the sense of Rizzi (1990).

The disagreement found in the literature extends to data that are crucial to the choice between the leftward and the rightward movement analysis for ASL content questions. It is not entirely clear whether the source of disagreement is dialectal variation between the consultants of NKMBL and those of Petronio and Lillo-Martin (for example, a different behavior of native and non-native signers) or whether some misinterpretation of the data occurred. At the time of writing, only NKMBL have made a large sample of videos available (at the website http://www.bu.edu/asllrp/book/), so a direct inspection of all the controversial data is not possible. Given this situation, it seems fair to conclude that the choice between the leftward and the rightward analysis for ASL content questions is still contentious.
3.3. Content questions in LIS

The pattern of content questions in LIS, which has been discussed by Cecchetto et al. (2009) (from now on CGZ), bears on the choice between the leftward and the rightward movement analysis. Although LIS, like other sign languages, has a relatively free word order due to scrambling possibilities, CGZ note that it is a head-final language. The verb (the head of the VP) follows the direct object, and signs such as modal verbs (cf. (11)), aspectual markers (cf. (12)), and negation (cf. (13)) follow the verb. If these signs sit in the heads of dedicated functional projections, this word order confirms that LIS is head-final. (Following CGZ, LIS signs are glossed here directly in English. Videos of LIS examples are available at the website http://www.filosofia.unimi.it/~zucchi/ricerca.html.)

(11)  gianni apply can                                       [LIS]
      'Gianni can apply.'

(12)  gianni house buy done                                  [LIS]
      'Gianni bought a house.'

(13)                    neg
      gianni maria love not                                  [LIS]
      'Gianni doesn't love Maria.'
In LIS, a wh-sign sits in the rightmost position in the postverbal area, following any functional sign (the same happens for wh-phrases composed of a wh-determiner and its restriction, as CGZ show):

(14)               wh
      cake eat not who                                       [LIS]
      'Who did not eat the cake?'

(15)                  wh
      house build done who                                   [LIS]
      'Who built the house?'
Although wh-words in LIS can remain in situ under a restricted set of circumstances, namely if they are discourse-linked, they cannot sit in the left periphery under any condition. In this sense, the pattern of wh-items is sharper in LIS than in ASL.

CGZ adopt a version of the rightward movement analysis inspired by NKMBL's analysis of ASL and explicitly ask why sign languages, unlike spoken languages, should allow rightward wh-movement. Their answer to this question capitalizes on the pattern of wh-NMM in LIS. In both ASL and LIS, the main feature of wh-NMM is furrowing of the eyebrows (incidentally, although this type of NMM for wh-questions is crosslinguistically very common, it is not a sign language universal: in languages like HZJ and ÖGS the main wh-NMM does not involve eyebrow position, but 'chin up', which may be accompanied by a head thrust forward (cf. Šarac et al. 2007)). There is an important difference in the distribution of wh-NMM between ASL and LIS, though. In ASL, if wh-NMM spreads, it does so over the entire sentence. In LIS, the extent of spreading depends on the grammatical function of the wh-phrase (this is a slight simplification; see CGZ for a more complete description). If the wh-phrase is the subject, wh-NMM spreads over the entire sentence (cf. (16)). However, if the wh-phrase is the object, wh-NMM spreads over the object and the verb, but it is not co-articulated with the subject (cf. (17)):

(16)  wh
      t gianni see who                                       [LIS]
      'Who saw Gianni?'

(17)         wh
      gianni t eat what                                      [LIS]
      'What does Gianni eat?'
CGZ interpret this pattern as an indication that wh-NMM in LIS marks the dependency between the base position of the wh-phrase and the sentence-final COMP position (this is indicated in (16)–(17) by the fact that wh-NMM starts being articulated in the position of the trace/copy). In this respect, wh-NMM would be similar to wh-movement, since both unambiguously connect two discontinuous positions. While wh-movement would be the manual strategy to indicate a wh-dependency, wh-NMM would be the non-manual strategy to do the same. Under the assumption that NMM is a prosodic cue that realizes the C[WH] feature, CGZ relate the LIS pattern to the pattern found in various spoken languages in which wh-dependencies are prosodically marked (this happens in Japanese, as discussed by Deguchi/Kitagawa (2002) and Ishihara (2002), but also in other spoken languages, which are discussed by Richards (2006)).

However, one difference remains between LIS and spoken languages in which wh-dependencies are phonologically marked. Wh-movement and the prosodic strategy of wh-marking do not normally co-occur in spoken languages that prosodically mark wh-dependencies, as wh-phrases remain in situ in these languages (this holds for Japanese and for the other languages discussed by Richards). CGZ explain the lack of co-occurrence of prosodic marking and overt movement in spoken languages by saying that this would introduce a redundancy, since two strategies would be applied to mark the very same wh-dependency. As for the fact that wh-NMM and wh-movement do co-occur in LIS, CGZ propose that LIS might be more tolerant of the redundancy between movement and NMM because sign languages, unlike spoken languages, are inherently multidimensional. So, ultimately, they explain the possibility of rightward wh-movement as an effect of the different modality.

CGZ extend their analysis to ASL. This extension is based on the revised version of the rightward movement analysis proposed by Neidle (2002), according to which the wh-phrase passes through a focus position in the left periphery in its movement towards Spec,CP in the right periphery. CGZ claim that this intermediate step in the left periphery can explain the different distribution of wh-NMM in LIS and ASL.

To date, CGZ's account is the only attempt to explain the difference between spoken and sign languages in the availability of a position for wh-phrases in the right periphery. However, the hypothesis that NMM can mark discontinuous dependencies is controversial, since it is not supported in sign languages other than LIS. Typically, NMMs are associated with lexical material or with the c-command domain of a functional head. CGZ's analysis therefore requires a significant revision of the theory of grammatical markers. It remains to be seen whether this revision is supported by evidence coming from NMM in sign languages other than LIS.
3.4. Remnant movement analyses

If wh-movement is rightward in sign languages, as argued by NKMBL and by CGZ, the problem arises of explaining the difference with spoken languages, in which it is leftward. CGZ tackle this issue, as already mentioned, but another possible approach is that wh-movement is leftward in both sign and spoken languages, and that in sign languages it only appears to be rightward due to the systematic occurrence of remnant movement. According to a standard version of the remnant movement analysis, the wh-phrase first moves to a dedicated position in the left periphery, say Spec,CP (as in spoken languages). Then the constituent out of which the wh-phrase has moved (the remnant) is moved to its left. This is schematically represented in Figure 14.1.

Fig. 14.1: Schematic representation of the remnant movement analysis for right peripheral wh-phrases.

The result is that the location of the wh-phrase on the right side is only apparent because, structurally speaking, the wh-phrase sits in the left periphery. If one adopts the remnant movement analysis, the gap between spoken and sign languages is partially filled, since this analysis has been systematically applied to many constructions in spoken languages by supporters of the antisymmetric framework (cf. Kayne 1994, 1998). The antisymmetric framework bans rightward movement and rightward adjunction altogether, whence the widespread use of the remnant movement option to explain the right placement of various categories. For example, Poletto and Pollock (2004) propose a remnant movement analysis for wh-constructions in some Romance dialects that display instances of in situ wh-phrases.

The standard version of the remnant movement analysis has been criticized by NKMBL, who claim that it runs into difficulties accounting for the distribution of wh-NMM in ASL. A modified version of the remnant movement analysis is applied to content questions in Indo-Pakistani Sign Language (IPSL) by Aboh, Pfau, and Zeshan (2005) and to content questions in LSB by de Quadros (1999). Aboh and Pfau (2011) extend this type of analysis to content questions in the Sign Language of the Netherlands (NGT). All these analyses are compatible with the antisymmetric framework. According to the modified version, the sentence-final wh-sign is a head in the complementizer system. Since this head sits in the left periphery of the structure, its right placement is derived by moving the entire clause to a structural position to its left. In this account, as in more standard remnant movement analyses, the wh-sign does not move rightward, and its right placement is a by-product of the fact that other constituents move to its left. This version can apply to sign languages in which the right-peripheral wh-item is a single sign (not a phrase). IPSL, LSB, and NGT all share this property.

IPSL content questions will be described here, since they have been used as an argument for a specific theory of clause typing by Aboh and Pfau (2011). Aboh, Pfau, and Zeshan (2005) report that IPSL is an SOV language in which a single wh-sign (glossed as g-wh) covers the whole range of question words in other languages. Its interpretation depends on the context and, if this does not suffice, g-wh may combine with other non-interrogative signs to express more specific meanings. Crucially, g-wh must occur sentence-finally. Examples (18) and (19) are from Aboh and Pfau (2011) (subscripts refer to points in the signing space, i.e. localizations of present referents or localizations that have been established for non-present referents).

(18)                       wh
      father index3 search g-wh                              [IPSL]
      'What is/was father searching?'

(19)              wh
      index3 come g-wh                                       [IPSL]
      'Who is coming?'
Wh-NMM (raised eyebrows and backward head position with the chin raised) minimally scopes over g-wh but can extend to successively bigger constituents, with the exclusion of topics. A consequence of this scope pattern is that the whole proposition (or clause) may (but does not need to) be within the scope of wh-NMM.

Assuming the modified version of the remnant movement analysis summarized above, g-wh is a complementizer, so content questions in IPSL never surface with a wh-phrase (the object position in (18) and the subject position in (19) would be occupied by a silent phrase that is unselectively bound, following a proposal by Cheng (1991)). Aboh and Pfau (2011) stress the theoretical implications of the IPSL pattern: even if wh-phrases typically participate in the meaning of questions cross-linguistically, IPSL would show that they are not necessary to type a content question as interrogative, since there are content questions with no wh-phrase. They discuss the consequences of this implication for the general theory of clause typing.

A complication for Aboh et al.'s (2005) account is that g-wh may (although it does not need to) combine with non-interrogative signs to express more specific meanings. This is illustrated in (20) and (21), in which the sign place is associated with g-wh to express the meaning 'where':

(20)  index2 friend place sleep g-wh                         [IPSL]

(21)  index2 friend sleep place g-wh                         [IPSL]
      'Where does your friend sleep?'
As (20) and (21) indicate, the sign optionally associated with g-wh, namely place, may either appear at the right periphery, where it is adjacent to g-wh, or in situ. Since, under Aboh et al.'s (2005) account, place and g-wh do not form a constituent, deriving the word order in (21) is not straightforward. In fact, Aboh, Pfau, and Zeshan must assume that remnant movement applies within the clausal constituent, which in turn moves to the left of the head that hosts g-wh. A rough simplification of this derivation is illustrated in (22). Presumably, a similar (complicated) derivation would be given for sign languages displaying interrogative phrases in the right periphery, should Aboh et al.'s (2005) account be extended to them.

(22)  [[[index2 friend tz sleep]i placez ti]j g-wh tj]       [IPSL]
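The leftward-only logic of this derivation can be visualized with a small procedural sketch. The Python fragment below is purely illustrative (the assumed base order, the constituent boundaries, and the helper function are simplifications made for this sketch, not part of Aboh, Pfau, and Zeshan's formal proposal); it shows how three successive leftward movements derive the surface order of (21), with the clause-final position of g-wh falling out as a by-product:

def move_left(seq, constituent):
    # Leftward movement only: remove the contiguous sub-sequence
    # `constituent` and re-attach it at the left edge.
    s, c = " ".join(seq), " ".join(constituent)
    assert c in s, "constituent must be contiguous"
    return list(constituent) + s.replace(c, "", 1).split()

clause = ["index2", "friend", "place", "sleep"]          # assumed base order

step1 = move_left(clause, ["place"])                     # place moves left
step2 = move_left(step1, ["index2", "friend", "sleep"])  # remnant moves past place
surface = move_left(["g-wh"] + step2, step2)             # clause moves past g-wh

print(" ".join(surface))  # index2 friend sleep place g-wh -- the order in (21)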
Summarizing, remnant movement analyses can explain the right placement of wh-items in sign languages and can reduce the gap with spoken languages, in which remnant movement analyses have been systematically exploited. A possible concern is that it is not always clear which features trigger the movement of the remnant. If movement of the remnant is not independently motivated, the remnant movement analysis can derive the correct word order but it runs the risk of being an ad hoc device.
3.5. Wh-duplication

A feature that often surfaces in content questions in the sign languages analyzed in the literature is that the wh-sign may be duplicated. This phenomenon has been described for ASL, LSB, LIS, HZJ, ÖGS, and NGT (see the references for these languages listed above) but has been reported, although less systematically, for other sign languages as well. Although cases of duplication of a wh-word are not unheard of in spoken languages (cf. Felser 2004), the scope of the phenomenon in sign languages seems much wider.

From a theoretical point of view, it is tempting to analyze duplication of a wh category by adopting the copy theory of traces, proposed by Chomsky (1993) and much following work. This theory takes traces left by movement to be perfect copies of the moved category, apart from the fact that (in the typical case) they are phonologically empty. Assuming the copy theory of traces, duplication is the null hypothesis, and what must be explained is the absence of duplication, namely the cancellation of one copy (typically, the lower one). Given their pervasive pattern of duplication, sign languages are a good testing ground for the copy theory of traces.

Nunes's (2004) theory of copy cancellation will be summarized here, since it is extended by Nunes and de Quadros (2008) to cases of wh-duplication in sign languages (see also Cecchetto (2006) for a speculation on why copies are more easily spelled out in sign languages than in spoken languages). Nunes (2004) claims that, in the normal case, only one copy can survive because, if two identical copies were present, the resulting structure could not be linearized under Kayne's (1994) Linear Correspondence Axiom (LCA), which maps asymmetric c-command into linear precedence. This is so because the LCA would be required to assign different positions to the 'same' element. For example, in a structure like (23), the subject 'John' would asymmetrically c-command and would be asymmetrically c-commanded by the same element, namely 'what'. This would result in a contradiction, since 'what' should both precede and be preceded by 'John'. Cancellation of the lower copy of 'what' fixes the problem.

(23)  What did John buy what?

In Kayne's framework, the LCA is a condition determining word order inside the sentence, while it does not determine the order of morphemes inside the word. In other terms, the LCA cannot see the internal structure of the word. Nunes and de Quadros capitalize on the word-internal 'blindness' of the LCA to explain wh-reduplication in LSB and ASL. They assume that multiple copies of the same category can survive only if one of these copies undergoes a process of morphological fusion with another word, from which it becomes indistinguishable as far as the LCA is concerned. More specifically, they claim that the duplicated sign becomes fused with the silent head of a focus projection. This explains why reduplication is a focus marking strategy. Since only a head can be fused with another head, Nunes and de Quadros can explain why phrases (including wh-phrases) can never be duplicated in LSB (and, possibly, in ASL as well). This approach naturally extends to other cases in which duplication is a focus marking device, namely lexical verbs, modals, etc.
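The linearization failure can also be rendered as a short sketch. The following Python fragment is a toy illustration (the precedence pairs are assumptions standing in for the output of the LCA on (23), not a full implementation of Kayne's axiom): it treats linearization as a consistency check over 'x precedes y' pairs derived from asymmetric c-command, and shows why pronouncing both copies of 'what' is contradictory while deleting the lower copy is not.

def linearizable(precedences):
    # A linear order exists only if no element is required both to
    # precede and to follow another element.
    return not any((y, x) in precedences for (x, y) in precedences)

# Both copies pronounced: 'what' asymmetrically c-commands 'john' from
# its landing site, while 'john' asymmetrically c-commands the lower
# copy. Since the two copies count as the same element, we get:
both_copies = {("what", "john"), ("john", "what")}
print(linearizable(both_copies))         # False: contradiction

# Cancelling (not pronouncing) the lower copy removes the offending pair:
lower_copy_deleted = {("what", "john")}
print(linearizable(lower_copy_deleted))  # True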
3.6. Conclusion on content questions

At the beginning of this section it was pointed out that content questions might be an area of macrotypological variation between spoken and sign languages. It is time to evaluate the plausibility of that hypothesis on the basis of the evidence presented here and of other information available in the literature. Table 14.2 summarizes the information on the position of wh-signs in sign languages for which the literature reports enough data. For sign languages that have not already been mentioned, the source is indicated.

Tab. 14.2: Position of wh-signs in sign languages

American Sign Language (ASL); Brazilian Sign Language (LSB):
    Wh-items may occur at the left periphery, at the right periphery, and in situ. The extent to which these options are available in ASL remains controversial.

Croatian Sign Language (HZJ), cf. Šarac and Wilbur (2006); Finnish Sign Language (FinSL), cf. Savolainen (2006); New Zealand Sign Language (NZSL), cf. McKee (2006):
    Wh-items can appear sentence-initially, sentence-finally, or doubled in both positions.

Australian Sign Language (Auslan), cf. Johnston and Schembri (2007):
    Wh-items can appear in situ, in sentence-initial position, or doubled in sentence-initial and sentence-final position.

Austrian Sign Language (ÖGS), cf. Šarac et al. (2007):
    The most 'neutral' position for wh-items is at the left edge.

Israeli Sign Language (Israeli SL), cf. Meir (2004); Sign Language of the Netherlands (NGT), cf. Aboh and Pfau (2011); Catalan Sign Language (LSC), cf. Quer et al. (2005); Spanish Sign Language (LSE), cf. Herrero (2009):
    The natural position of wh-phrases is at the right edge.

Japanese Sign Language (NS), cf. Morgan (2006):
    Wh-signs are typically, but not necessarily, clause-final. Wh-phrases can also occur in situ and on the left, in which case placement of a copy at the end of the sentence is not unusual.

Hong Kong Sign Language (HKSL), cf. Tang (2006):
    The wh-signs for argument questions are either in situ or in clause-final position. Wh-signs for adjuncts are generally clause-final. Movement of the wh-sign to clause-initial position is not allowed.

Italian Sign Language (LIS); Indo-Pakistani Sign Language (IPSL):
    Wh-phrases move to the right periphery, while movement to the left periphery is altogether banned.

Finally, Zeshan (2004), in a study that includes data from 35 different sign languages, claims that "across the sign languages in the data, the most common syntactic positions for question words are clause-initial, clause-final, or both of these, that is, a construction with a doubling of the question word [...]. In situ placement of question words occurs much less often across sign languages and may be subject to particular restrictions".

One should be very cautious when drawing a generalization from these data, since the set of sign languages for which the relevant information is available is still very restricted, not to mention the fact that much controversy remains even for better-studied sign languages, such as ASL. However, it is clear that there are some languages (LIS, IPSL, and HKSL being the clearest cases, and Israeli SL, LSC, LSE, NGT, and NS being other plausible candidates) in which the right periphery of the clause is the only natural position for wh-items. In other sign languages the pattern is more complicated, since other positions for wh-signs are available as well. Finally, in only one sign language in this group (ÖGS) might the right periphery not be accessible at all. Therefore, the best guess based on the available knowledge seems to be that the macrotypological variation between sign and spoken languages in the positioning of wh-items is real. This is not necessarily an argument in favor of the rightward movement analysis, since there are other possible explanations for the right-peripheral position of wh-phrases, namely remnant movement accounts. Still, even if some form of the remnant movement proposals is right, it remains to be understood why remnant movement is more widespread in content questions in sign languages than in spoken languages. All in all, it seems fair to conclude that one argument originally used against the rightward movement analysis for ASL by Petronio and Lillo-Martin, namely that it would introduce a type of movement unattested in other languages, has been somewhat weakened by later research on other sign languages.

There is another tentative generalization that future research should evaluate. Sign languages for which a formal account has been proposed seem to come in two main groups. On the one hand, one finds languages like ASL, LSB, and HZJ. In these languages, both the left and the right periphery are accessed by the wh-sign, although the extent to which this can happen remains controversial (at least in ASL). On the other hand, IPSL and LIS are clearly distinct, since wh-words are not allowed to sit in the left periphery under any condition (this is a pre-theoretical description; if remnant movement analyses are right, wh-phrases access the left periphery in LIS and IPSL as well). Interestingly, ASL, LSB, and HZJ are SVO, while IPSL and LIS are SOV. It has been proposed that the position of wh-phrases may be correlated with word order. In particular, Bach (1971), with leftward movement in spoken languages in mind, claimed that wh-movement is confined to languages that are not inherently SOV. The status of Bach's generalization is not entirely clear. An automatic search using the tools made available by the World Atlas of Language Structures Online reveals that, out of 497 languages listed as SOV, 52 display sentence-initial interrogatives (this search was made by combining "Feature 81: Order of Subject, Object and Verb" (Dryer 2009c) and "Feature 93: Position of Interrogative Phrases in Content Questions" (Dryer 2009b)). However, Bach's generalization is taken for granted in much theoretically oriented work (for example, Kayne (1994) tries to capture it in his antisymmetric framework) and it is rather clear that it holds for the better-studied SOV languages (Basque, Japanese, Turkish, or Hindi, among others). Assuming that Bach's generalization is on the right track, it should be qualified once sign languages enter the picture. The qualified generalization would state that in both sign and spoken languages wh-phrases can access the left periphery only if the language is not SOV. However, while wh-phrases remain in situ in SOV spoken languages, they can surface in the right periphery in SOV sign languages. It should be stressed that at present this is a very tentative generalization and only further crosslinguistic research on sign (and spoken) languages can confirm or reject it.
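The WALS search reported above is easy to reproduce programmatically. The sketch below is a hypothetical illustration: it assumes the two WALS features have been exported to a flat CSV file (here called wals.csv) with one row per language, and the column names are invented for the example; the exact counts will depend on the WALS release used.

import csv
from collections import Counter

with open("wals.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

# Feature 81: basic order of subject, object, and verb.
sov = [r for r in rows if r["order_of_subject_object_verb"] == "SOV"]

# Feature 93: position of interrogative phrases in content questions.
positions = Counter(r["position_of_interrogative_phrases"] for r in sov
                    if r["position_of_interrogative_phrases"])

print("SOV languages:", len(sov))                    # 497 in the text
print("...with sentence-initial interrogatives:",
      positions["Initial interrogative phrase"])     # 52 in the text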
4. Other constructions with wh-phrases

In spoken languages, wh-phrases are found in constructions distinct from content questions. These include full relative clauses, free relatives, exclamatives, rhetorical questions, and pseudoclefts. It is interesting to ask whether wh-movement is also observed in the corresponding constructions in sign languages. This issue is relevant for the debate concerning the role of wh-phrases in content questions (cf. Aboh and Pfau's (2011) claim, based on IPSL, that wh-phrases, not being inherently interrogative, are not the crucial factor that makes a sentence interrogative).

The first observation is that in no known sign language are (full) relative clauses formed by wh-movement, notwithstanding the fact that relative constructions in sign languages replicate all the major strategies of relativization identified in spoken languages, namely internally headed relatives, externally headed relatives, and correlatives. Detailed descriptions of relative constructions are available for three sign languages: ASL, LIS, and DGS. LIS relative constructions have been analyzed either as internally headed relatives (Branchini 2006; Branchini/Donati 2009) or as correlatives (Cecchetto et al. 2006). Pfau and Steinbach (2005) claim that DGS displays externally headed relative clauses. According to Liddell (1978, 1980), in ASL both internally and externally headed relative clauses can be identified (cf. Wilbur/Patschke (1999) for further discussion of ASL relatives; also see chapter 16, Complex Sentences, for discussion of relative clauses). Interestingly, although relative markers have been identified in all these languages, they are morphologically derived from demonstrative or personal pronouns, not from wh-signs. The lack of use of wh-items in full relative clauses (if confirmed for other sign languages) is an issue that deserves further analysis.

A related question is whether wh-NMM, understood as the non-manual marking normally found in content questions, is intrinsically associated with wh-signs. The answer to this question must be negative, since it is clear that there are various constructions in which wh-signs occur with a NMM different from wh-NMM. We already mentioned structures like (7) above, which are analyzed as indirect questions by Petronio and Lillo-Martin (1997) and do not display the wh-NMM normally found in ASL. However, the best-studied case of a wh-construction occurring without wh-NMM is the ASL construction illustrated in (25) (Branchini (2006) notes a similar construction in LIS):

(25)       re
      john buy what, book                                    [ASL]
      'The thing/What John bought is a book.'
Superficially, the construction in (25) resembles a question-answer pair at the discourse level, but there is evidence that it must be analyzed as a single sentential unit. The first obvious observation is that, if the sequence john buy what were an independent question, we would expect the canonical wh-NMM to occur; however, eyebrow raise (instead of furrowing) occurs. Davidson et al. (2008, in press) discuss further evidence that structures like (25) are declarative sentences. For example, they show that these structures can be embedded under predicates which take declarative clauses as complements (hope, think, or be-afraid), but not under predicates that take interrogative clauses as complements, such as ask (see also Wilbur 1994):

(26)                      re
      those girls hope [their father buy what, car]          [ASL]
      'Those girls hope that the thing/what their father bought is a car.'

(27)  *those girls ask [their father buy what, car]
A natural analysis takes the ASL sentence (25) to be the counterpart of the English pseudocleft sentence 'What John bought is a book' (cf. Petronio (1991) and Wilbur (1996) for this type of account). Under this analysis, the wh-constituent in (25) would be taken to be a free relative (but see Ross (1972), den Dikken et al. (2000), and Schlenker (2003) for analyses that reject the idea that a pseudocleft contains a free relative). However, Davidson et al. (2008, in press) object to a pseudocleft analysis on the basis of various facts, including the observation that, unlike in free relatives in English, any wh-word (who, where, why, which, etc.) can appear in structures like (25). As a result, they conclude that the wh-constituent in (25) is an embedded question, not a free relative. The proper characterization of the wh-constituent in sentences like (25) bears on the controversy concerning the position of wh-items in ASL, since there seems to be a consensus that, at least in this construction, wh-items must be clause-final. So, if the wh-constituent in (25) were a question, it would be an undisputed case of a question in which the wh-item must be right-peripheral.

One question that arises is what can explain the distribution of wh-NMM, since it is clear that wh-items are not intrinsically equipped with it. There is consensus that the distribution of wh-NMM is largely determined by syntactic factors, although different authors disagree on the specifics of their proposals (NKMBL and Wilbur and Patschke (1999) claim that wh-NMM is a manifestation of the wh feature in COMP, Petronio and Lillo-Martin (1997) argue that wh-NMM expresses the combination of wh and Focus features in COMP, and CGZ claim that wh-NMM marks the wh-dependency). However, it has been proposed that non-syntactic factors play an important role as well. For example, Sandler and Lillo-Martin (2006), reporting work published in Hebrew by Meir and Sandler (2004), remark that the facial expression associated with content questions in Israeli SL (furrowed brow) is replaced by a different expression if the question does not require an answer but involves reproach (as in the Israeli SL version of the question "Why did you just walk out of my store with that shirt without paying?"). Sandler and Lillo-Martin conclude that the pragmatic function of a content question is crucial in determining the type of NMM that surfaces: when the speaker desires an answer involving content, wh-NMM is typically used, but when the information being questioned is already known, wh-NMM is replaced with a different expression.

Since it is commonly assumed that wh-NMM has the characteristics of a prosodic element (intonation), it is not surprising that prosodic considerations play a role in its distribution. In particular, Sandler and Lillo-Martin discuss some cases in which wh-NMM is determined by Intonation Phrasing (for example, if a parenthetical interrupts a wh-question, wh-NMM stops being articulated over the parenthetical and is resumed over the portion of the clause that follows it). All in all, wh-NMM is a phenomenon at the interface between syntax and phonology, with important consequences for the pragmatic uses of content questions. Whereas its syntactic role is not in dispute, only a combined account can explain its precise distribution.
5. Conclusion

Results emerging from the research on questions in sign languages have proved important both for linguists interested in formal accounts and for those interested in language typology. On the one hand, some well-established cross-linguistic generalizations about the position of interrogative elements in content questions need some revision or qualification once sign languages are considered. On the other, pieces of the formal apparatus of analysis, like the position of specifiers in the syntactic structure and the notions of chain and copy/trace, may need refining, since the sign language pattern is partially different from that emerging from spoken languages. Thus, the formal theory of grammar may be considerably enriched and modified by the study of sign languages. The opposite holds as well, however. The pattern observed in sign languages is so rich and complex that no adequate description could be reached without a set of elaborate working hypotheses that can guide the research. Eventually, these working hypotheses can be revised or even rejected, but they are crucial in order to orient the research. It is unfortunate that the same virtuous interaction between empirical observation and theoretical approaches has not been observed in the study of other sentence types. In particular, a deep investigation of imperatives (and exclamatives) in sign languages is still to be done, and one must hope that this gap will soon be filled.
6. Literature

Aboh, Enoch/Pfau, Roland/Zeshan, Ulrike 2005 When a Wh-Word Is Not a Wh-Word: The Case of Indian Sign Language. In: Bhattacharya, Tanmoy (ed.), The Yearbook of South Asian Languages and Linguistics 2005. Berlin: Mouton de Gruyter, 11–43.
Aboh, Enoch/Pfau, Roland 2011 What's a Wh-Word Got to Do with It? In: Benincà, Paola/Munaro, Nicola (eds.), Mapping the Left Periphery: The Cartography of Syntactic Structures, Vol. 5. Oxford: Oxford University Press, 91–124.
Bach, Emmon 1971 Questions. In: Linguistic Inquiry 2, 153–166.
Baker, Charlotte/Cokely, Dennis 1980 American Sign Language: A Teacher's Resource Text on Grammar and Culture. Silver Spring, MD: T.J. Publishers.
Barattieri, Chiara 2006 Il periodo ipotetico nella Lingua dei Segni Italiana (LIS). MA Thesis, University of Siena.
Branchini, Chiara 2006 On Relativization and Clefting in Italian Sign Language (LIS). PhD Dissertation, University of Urbino.
Branchini, Chiara/Donati, Caterina 2009 Relatively Different: Italian Sign Language Relative Clauses in a Typological Perspective. In: Lipták, Anikó (ed.), Correlatives Cross-Linguistically. Amsterdam: Benjamins, 157–194.
Cecchetto, Carlo 2006 Reconstruction in Relative Clauses and the Copy Theory of Traces. In: Pica, Pierre/Rooryck, Johan (eds.), Linguistic Variation Yearbook 5. Amsterdam: Benjamins, 73–103.
Cecchetto, Carlo/Geraci, Carlo/Zucchi, Sandro 2006 Strategies of Relativization in Italian Sign Language. In: Natural Language and Linguistic Theory 24, 945–975.
Cecchetto, Carlo/Geraci, Carlo/Zucchi, Sandro 2009 Another Way to Mark Syntactic Dependencies: The Case for Right Peripheral Specifiers in Sign Languages. In: Language 85(2), 278–320.
Cheng, Lisa 1991 On the Typology of Wh-questions. PhD Dissertation, MIT.
Chomsky, Noam 1993 A Minimalist Program for Linguistic Theory. In: Hale, Kenneth/Keyser, Samuel Jay (eds.), The View from Building 20. Cambridge, MA: MIT Press, 1–52.
Coerts, Jane 1992 Nonmanual Grammatical Markers: An Analysis of Interrogatives, Negations and Topicalisations in Sign Language of the Netherlands. PhD Dissertation, University of Amsterdam.
Coulter, Geoffrey R. 1979 American Sign Language Typology. PhD Dissertation, University of California, San Diego.
Davidson, Kathryn/Caponigro, Ivano/Mayberry, Rachel 2008 Clausal Question-answer Pairs: Evidence from ASL. In: Abner, Natasha/Bishop, Jason (eds.), Proceedings of the 27th West Coast Conference on Formal Linguistics. Somerville, MA: Cascadilla Press, 108–115.
Davidson, Kathryn/Caponigro, Ivano/Mayberry, Rachel in press The Semantics and Pragmatics of Clausal Question-Answer Pairs in American Sign Language. To appear in: Proceedings of SALT XVIII.
Deguchi, Masanori/Kitagawa, Yoshihisa 2002 Prosody and Wh-Questions. In: Hirotani, Masako (ed.), Proceedings of the Thirty-Second Annual Meeting of the North East Linguistic Society. Amherst, MA: GLSA, 73–92.
Dikken, Marcel den/Meinunger, André/Wilder, Chris 2000 Pseudoclefts and Ellipses. In: Studia Linguistica 54, 41–89.
Dryer, Matthew S. 2009a Polar Questions. In: Haspelmath, Martin/Dryer, Matthew S./Gil, David/Comrie, Bernard (eds.), The World Atlas of Language Structures Online. Munich: Max Planck Digital Library, Chapter 116. [http://wals.info/feature/116]
Dryer, Matthew S. 2009b Position of Interrogative Phrases in Content Questions. In: Haspelmath, Martin/Dryer, Matthew S./Gil, David/Comrie, Bernard (eds.), The World Atlas of Language Structures Online. Munich: Max Planck Digital Library, Chapter 92. [http://wals.info/feature/92]
Dryer, Matthew S. 2009c Order of Subject, Object and Verb. In: Haspelmath, Martin/Dryer, Matthew S./Gil, David/Comrie, Bernard (eds.), The World Atlas of Language Structures Online. Munich: Max Planck Digital Library, Chapter 81. [http://wals.info/feature/81]
Dubuisson, Colette/Boulanger, Johanne/Desrosiers, Jules/Lelièvre, Linda 1991 Les mouvements de tête dans les interrogatives en langue des signes québécoise. In: Revue québécoise de linguistique 20(2), 93–122.
Dubuisson, Colette/Miller, Christopher/Pinsonneault, Dominique 1994 Question Sign Position in LSQ (Québec Sign Language). In: Ahlgren, Inger/Bergman, Brita/Brennan, Mary (eds.), Perspectives on Sign Language Structure: Papers from the Fifth International Symposium on Sign Language Research (Vol. 1). Durham: International Sign Linguistics Association and Deaf Studies Research Unit, University of Durham, 89–104.
Felser, Claudia 2004 Wh-copying, Phases and Successive Cyclicity. In: Lingua 114, 543–574.
Ferreira-Brito, Lucinda 1995 Por uma gramática das línguas de sinais. Rio de Janeiro: Tempo Brasileiro, UFRJ.
Fischer, Susan D. 1975 Influences on Word-order Change in American Sign Language. In: Li, Charles (ed.), Word Order and Word Order Change. Austin: University of Texas Press, 1–25.
Geraci, Carlo 2006 Negation in LIS. In: Bateman, Leah/Ussery, Cherlon (eds.), Proceedings of the Thirty-Fifth Annual Meeting of the North East Linguistic Society, Vol. 2. Amherst, MA: GLSA, 217–230.
Herrero, Ángel 2009 Gramática didáctica de la lengua de signos española. Madrid: Ediciones SM-CNSE.
Ishihara, Shinichiro 2002 Invisible but Audible Wh-Scope Marking: Wh-Constructions and Deaccenting in Japanese. In: Mikkelsen, Line/Potts, Christopher (eds.), Proceedings of the 21st West Coast Conference on Formal Linguistics (WCCFL 21). Somerville, MA: Cascadilla Press, 180–193.
Johnston, Trevor/Schembri, Adam 2007 Australian Sign Language: An Introduction to Australian Sign Language Linguistics. Cambridge: Cambridge University Press.
Kayne, Richard 1994 The Antisymmetry of Syntax. Cambridge, MA: MIT Press.
Kayne, Richard 1998 Overt vs. Covert Movement. In: Syntax 1(2), 128–191.
Liddell, Scott K. 1978 Nonmanual Signals and Relative Clauses in American Sign Language. In: Siple, Patricia (ed.), Understanding Language Through Sign Language Research. New York: Academic Press, 59–90.
Liddell, Scott K. 1980 American Sign Language Syntax. The Hague: Mouton.
Lillo-Martin, Diane/Fischer, Susan D. 1992 Overt and Covert Wh-Questions in American Sign Language. Paper Presented at the Fifth International Symposium on Sign Language Research, Salamanca, Spain.
MacFarlane, James 1998 From Affect to Grammar: Ritualization of Facial Affect in Signed Languages. Paper Presented at the Theoretical Issues in Sign Language Research Conference (TISLR), Gallaudet University. [http://www.unm.edu/~jmacfarl/eyebrow.html]
McKee, Rachel 2006 Aspects of Interrogatives and Negation in New Zealand Sign Language. In: Zeshan, Ulrike (ed.), Interrogative and Negative Constructions in Sign Languages. Nijmegen: Ishara Press, 70–90.
Meir, Irit 2004 Question and Negation in Israeli Sign Language. In: Sign Language & Linguistics 7, 97–124.
Meir, Irit/Sandler, Wendy 2004 Safa bamerxav: Eshnav le-sfat hasimanim hayisraelit (Language in Space: A Window on Israeli Sign Language). Haifa: University of Haifa Press.
Morgan, Michael 2006 Interrogatives and Negatives in Japanese Sign Language (JSL). In: Zeshan, Ulrike (ed.), Interrogative and Negative Constructions in Sign Languages. Nijmegen: Ishara Press, 91–127.
Neidle, Carol/MacLaughlin, Dawn/Lee, Robert/Bahan, Benjamin/Kegl, Judy 1998 Wh-Questions in ASL: A Case for Rightward Movement. American Sign Language Linguistic Research Project Reports, Report 6. [http://www.bu.edu/asllrp/reports.html]
Neidle, Carol/Kegl, Judy/MacLaughlin, Dawn/Bahan, Benjamin/Lee, Robert 2000 The Syntax of American Sign Language: Functional Categories and Hierarchical Structure. Cambridge, MA: MIT Press.
Neidle, Carol 2002 Language Across Modalities: ASL Focus and Question Constructions. In: Pica, Pierre/Rooryck, Johan (eds.), Linguistic Variation Yearbook 2. Amsterdam: Benjamins, 71–93.
Nunes, Jairo 2004 Linearization of Chains and Sideward Movement. Cambridge, MA: MIT Press.
Nunes, Jairo/Quadros, Ronice M. de 2008 Phonetically Realized Traces in American Sign Language and Brazilian Sign Language. In: Quer, Josep (ed.), Signs of the Time: Selected Papers from TISLR 2004. Hamburg: Signum, 177–190.
Petronio, Karen 1991 A Focus Position in ASL. In: Bobaljik, Jonathan D./Bures, Tony (eds.), Papers from the Third Student Conference in Linguistics (MIT Working Papers in Linguistics 14). Cambridge, MA: MIT, 211–225.
Petronio, Karen/Lillo-Martin, Diane 1997 Wh-Movement and the Position of Spec-CP: Evidence from American Sign Language. In: Language 73, 18–57.
Pfau, Roland/Steinbach, Markus 2005 Relative Clauses in German Sign Language: Extraposition and Reconstruction. In: Bateman, Leah/Ussery, Cherlon (eds.), Proceedings of the Thirty-Fifth Annual Meeting of the North East Linguistic Society, Vol. 2. Amherst, MA: GLSA, 507–521.
Poletto, Cecilia/Pollock, Jean-Yves 2004 On the Left Periphery of Some Romance Wh-questions. In: Rizzi, Luigi (ed.), The Structure of CP and IP: The Cartography of Syntactic Structures. Oxford: Oxford University Press, 251–296.
Quadros, Ronice M. de 1999 Phrase Structure of Brazilian Sign Language. PhD Dissertation, Pontifícia Universidade Católica, Rio Grande do Sul.
Quadros, Ronice M. de 2006 Questions in Brazilian Sign Language (LSB). In: Zeshan, Ulrike (ed.), Interrogative and Negative Constructions in Sign Languages. Nijmegen: Ishara Press, 270–283.
Quer, Josep et al. 2005 Gramàtica bàsica LSC. Barcelona: DOMAD-FESOCA.
Richards, Norvin 2006 Beyond Strength and Weakness. Manuscript, MIT.
Rizzi, Luigi 1990 Relativized Minimality. Cambridge, MA: MIT Press.
Ross, John R. 1972 Act. In: Davidson, Donald/Harman, Gilbert (eds.), Semantics of Natural Languages. Dordrecht: Reidel, 70–126.
Sadock, Jerrold M./Zwicky, Arnold M. 1985 Speech Act Distinctions in Syntax. In: Shopen, Timothy (ed.), Language Typology and Syntactic Description. Cambridge: Cambridge University Press, 155–196.
Sandler, Wendy 1989 Prosody in Two Natural Language Modalities. In: Language and Speech 42, 127–142.
Sandler, Wendy/Lillo-Martin, Diane 2006 Sign Language and Linguistic Universals. Cambridge: Cambridge University Press.
Schlenker, Philippe 2003 Clausal Equations (A Note on the Connectivity Problem). In: Natural Language and Linguistic Theory 21, 157–214.
Šarac Kuhn, Ninoslava/Wilbur, Ronnie 2006 Interrogative Structures in Croatian Sign Language: Polar and Content Questions. In: Sign Language & Linguistics 9, 151–167.
Šarac, Ninoslava/Schalber, Katharina/Alibašić, Tamara/Wilbur, Ronnie 2007 Crosslinguistic Comparison of Interrogatives in Croatian, Austrian and American Sign Languages. In: Perniss, Pamela/Pfau, Roland/Steinbach, Markus (eds.), Visible Variation: Comparative Studies on Sign Language Structure. Berlin: Mouton de Gruyter, 207–244.
Savolainen, Leena 2006 Interrogatives and Negatives in Finnish Sign Language: An Overview. In: Zeshan, Ulrike (ed.), Interrogative and Negative Constructions in Sign Languages. Nijmegen: Ishara Press, 284–302.
Spolaore, Chiara 2006 Italiano e Lingua dei Segni Italiana a confronto: l'imperativo. MA Thesis, University of Venice.
Sutton-Spence, Rachel/Woll, Bencie 1999 The Linguistics of British Sign Language: An Introduction. Cambridge: Cambridge University Press.
Tang, Gladys 2006 Questions and Negation in Hong Kong Sign Language. In: Zeshan, Ulrike (ed.), Interrogative and Negative Constructions in Sign Languages. Nijmegen: Ishara Press, 198–224.
Van Herreweghe, Mieke/Vermeerbergen, Myriam 2006 Interrogatives and Negatives in Flemish Sign Language. In: Zeshan, Ulrike (ed.), Interrogative and Negative Constructions in Sign Languages. Nijmegen: Ishara Press, 225–257.
Wilbur, Ronnie 1994 Foregrounding Structures in American Sign Language. In: Journal of Pragmatics 22, 647–672.
Wilbur, Ronnie 1996 Evidence for the Function and Structure of Wh-Clefts in American Sign Language. In: Edmondson, William/Wilbur, Ronnie (eds.), International Review of Sign Linguistics. Hillsdale, NJ: Lawrence Erlbaum Associates, 209–256.
Wilbur, Ronnie/Patschke, Cynthia 1999 Syntactic Correlates of Brow Raise in ASL. In: Sign Language & Linguistics 2(3), 3–41.
Zanuttini, Raffaella/Portner, Paul 2003 Exclamative Clauses: At the Syntax-semantics Interface. In: Language 79(3), 39–81.
Zeshan, Ulrike 2003 Indo-Pakistani Sign Language Grammar: A Typological Outline. In: Sign Language Studies 3, 157–212.
Zeshan, Ulrike 2004 Interrogative Constructions in Sign Languages – Cross-linguistic Perspectives. In: Language 80, 7–39.
Zeshan, Ulrike 2006 Negative and Interrogative Structures in Turkish Sign Language (TİD). In: Zeshan, Ulrike (ed.), Interrogative and Negative Constructions in Sign Languages. Nijmegen: Ishara Press, 128–164.
Carlo Cecchetto, Milan (Italy)
15. Negation

1. Introduction
2. Manual negation vs. non-manual marking of negation
3. Syntactic patterns of negation
4. Negative concord
5. Lexical negation and morphological idiosyncrasies of negatives
6. Concluding remarks
7. Literature
Abstract

The expression of sentential negation in sign languages features many of the morphological and syntactic properties attested for spoken languages. However, non-manual markers of negation such as headshake or facial expression have been shown to play a central role in this type of language, and they interact in various interesting ways with manual negatives and with the syntactic structure of negative clauses, thus introducing modality-specific features. Particular sign language grammars are parametrized as to whether sentential negation can be encoded solely with a manual or a non-manual element, or with both. Multiple expression of negation at the manual level is another point of variation. Pending further detailed descriptions and syntactic analyses of negation in a larger pool of sign languages, it can be safely concluded that negation systems in the visual-gestural modality show the richness and complexity attested for natural languages in general.
1. Introduction

Within the still limited body of research on the grammar of sign languages, the expression of negation is one of the few phenomena that has received a considerable amount of attention. Apart from quite a number of descriptions and analyses of negative structures in individual sign languages, negation has been the object of a crosslinguistic project which investigated selected aspects of the grammar of a wide sample of sign languages from a typological perspective (Zeshan 2004, 2006a,b). A reason for the special attention devoted to the grammar of negation might lie in the fact that it constitutes a domain of grammar where manual and non-manual elements interact in very rich and intricate ways: beyond the superficial first impression that all sign languages negate by resorting to similar mechanisms, their negation systems display remarkably diverse constraints that interact in complex ways with the different components of each individual grammar. The main manual and non-manual ingredients of linguistic negation can be traced back to affective and conventionalized gestures of the hearing community the languages are embedded in, and it is precisely for this reason that the results of the research carried out in this domain provide strong evidence for the linguistic properties that recruited those gestures and integrated them into sophisticated linguistic systems. At the same time, the origin of many negative markers reinforces the hypothesis that contemporary sign languages, as a consequence of their relative youth and the medium in which they are articulated and perceived, systematically feature gestural and spatial resources which have been available during their genesis period and subsequent (re)creolization phases.

Looking into the properties of sign language negation systems is motivated by the need to offer a more accurate characterization of the role of the different non-manual markers that are used. It has been argued that non-manuals play different roles at each linguistic level (lexical marking, morphology, syntax, prosody; for an overview, see Pfau/Quer 2010), and detailed analyses of negatives in different languages strongly suggest that non-manual markers can be recruited for different functions at different levels across languages. This result is of utmost importance in order to tease apart linguistic vs. gestural non-manuals, which systematically co-occur within the same medium in visual-gestural languages.

This chapter offers an overview of the most representative traits of the sentential negation systems of the sign languages reported upon so far and highlights general tendencies as well as interesting language-specific particularities. As Zeshan (2004) points out, it might be too early to offer comprehensive typological analyses of sign languages, given the insufficient number of studied sign languages for statistical analysis as well as their unbalanced geographical distribution. Still, crosslinguistic comparison already yields quite a robust picture of the existing variation and it also allows for analyzing the attested variation against the background of spoken language negation. At the same time, theoretical syntax can also benefit from in-depth analyses of sign language negation systems, as they constitute a testing ground for existing accounts of the syntactic representation of functional elements.

The focus of section 2 is on the main types of manual and non-manual components of negation. The form of manual sentence negators is reviewed, and regular and irregular negative signs are characterized. Next, the different head movements and facial expressions that encode negation are described. Section 3 focuses on certain syntactic properties attested in sign language negation: the interaction with other syntactic categories, manual negation doubling, and spreading of non-manual markers. Section 4 addresses the multiple expression of negation in patterns of split negation and negative concord from a syntactic point of view. In section 5, non-sentential manual negation is discussed, together with some morphological idiosyncrasies associated with it.
2. Manual negation vs. non-manual marking of negation

For almost all sign languages described to date, sentential negation has been found to rely on two basic components: manual signs that encode negative meanings ranging from the basic negative operator to very specific ones, as well as different types of non-manual markers that can be either co-articulated with manual negative signs or, in some cases, with other lexical signs in order to convey negation on their own. With respect to these two components, we find a first parameter of crosslinguistic variation: while some sign languages appear to be able to encode sentential negation by means of a non-manual marker alone, which is obligatory (e.g., American Sign Language (ASL), German Sign Language (DGS), and Catalan Sign Language (LSC)), in other languages the presence of a non-manual marker is insufficient to negate the sentence, and thus a manual negator is required for that function (e.g., Italian Sign Language (LIS), Jordanian Sign Language (LIU), and Turkish Sign Language (TİD)). Zeshan (2006b, 46) labels languages of the former type "non-manual dominant" and languages of the latter type "manual dominant" languages. On the basis of her language sample, she establishes that non-manual dominant languages are a majority. In (1) and (2), examples of the two types of language with respect to this parameter illustrate the combinatorial possibilities of headshake with and without manual negation in LSC (Quer 2007) and LIS (Geraci 2005), which are non-manual dominant and manual dominant, respectively. (In the examples, hs stands for headshake, and the parentheses in the line above the glosses mark optional spreading domains of the non-manual.)
(1)  a.       ( ( )) hs
        santi meat eat not
              ( ) hs
     b. santi meat eat
        'Santi doesn't eat meat.'                            [LSC]

(2)  a. paolo contract sign non                              [LIS]
        'Paolo didn't sign the contract.'
          ( ( ( ))) hs
     b. *paolo contract sign
As we observe in the LIS example in (2), it is not the case that in manual dominant languages non-manual markings are totally absent. Rather, they are generally co-articulated with the manual negation and tend not to spread over other manual material. When there are several negative markers, the choice of non-manual is usually determined by the lexical negation, unlike what happens in non-manual dominant languages. It is important to notice that the function of non-manual marking of negation in non-manual dominant languages is almost exclusively to convey sentential negation (although see section 2.2 for some data that qualify this generalization). This is in contrast to manual negations, which often include more specific signs encoding negation and some other functional category such as aspect or modality, or a portmanteau sign conveying the negation of existence.
2.1. Manual negation

2.1.1. Negative particles

Standard sentential negation is realized in many sign languages by a manual sign that simply negates the truth of the proposition, such as the one found in LIU and LSC consisting of an index handshape with the palm facing outwards and slightly moving from side to side, as illustrated in Figure 15.1 and exemplified in (3) for LIU (Hendriks 2007, 107).
Fig. 15.1: Neutral sentential negation neg in LIU. Copyright © 2007 by Bernadet Hendriks. Reprinted with permission.
(3)  father mother deaf index1 neg // speak                  [LIU]
     'My father and mother aren't Deaf, they speak.'
However, basic sentential negation can occasionally carry an extra layer of pragmatic meaning: in a few instances, sentential negation signs have been claimed to convey some presupposition, such as neg-contr in Indo-Pakistani Sign Language (IPSL) (Zeshan 2004, 34 f.) or no-no in TİD (Zeshan 2006c, 154 f.). In such cases, the negative particle explicitly counters a conversational presupposition, which may be implicit, as in (4), or explicit in the preceding discourse, as in (5).

(4)  problem neg-contr                                       [IPSL]
     'There is no problem (contrary to what has been said/what is usually assumed/what you may be expecting).'

(5)  village good / city neg-contr                           [IPSL]
     'Villages are nice, but cities are not.'
A further nuance that is often added to basic sentential negation is emphasis, normally expressed through dedicated non-manual markings accompanying the manual negator, as reported in McKee (2006, 82) for New Zealand Sign Language (NZSL). Nevertheless, some languages have specialized manual negations that have been characterized as emphatic, with the meaning 'not at all' or 'absolutely not'. An example of this is the Finnish Sign Language (FinSL) sign no ('absolutely not') illustrated in (6) (Savolainen 2006, 296).

(6)              re
                 head turn-back/neg mouthing/squint
     index1 come no                                          [FinSL]
     'I'm definitely not coming!'
As we will see in section 4, doubling of a negative sign or negative concord results in emphasis on the negation as well.
As syntactic markers of negation, negative signs occasionally display features that are normally relevant in other domains of syntax. One such example might be what has been characterized as person inflection for the NZSL sign nothing, which can be articulated in locations associated with person features (McKee 2006, 85).
2.1.2. Irregular negatives

Manual sentential negation across sign languages usually features a number of lexical signs that incorporate negation either in a transparent way or opaquely in suppletive forms. Both types are usually referred to as instances of negation incorporation. Zeshan (2004) calls this group of items irregular negatives and points out that sign languages tend to display some such items crosslinguistically. The majority of such signs belong to recognizable semantic classes of predicates such as those expressing cognition (‘know’, ‘understand’), emotion or volition (‘like’, ‘want’), a modal meaning (‘can’, ‘need’, ‘must’), or possession/existence (‘have’, ‘there-be’). See, for instance, the minimal LSC pair can vs. cannot in Figure 15.2.
Fig. 15.2: LSC pair can vs. cannot
In addition, evaluative predicates (‘good’, ‘enough’) and grammatical tense/aspect notions such as perfect or future tend to amalgamate with negation lexically as well, as in the Hong Kong Sign Language (HKSL) negated future sign won’t and the negated perfect sign not-yet (Tang 2006, 219):

                              neg
(7) kenny february fly taiwan won’t                              [HKSL]
    ‘Kenny won’t fly to Taiwan in February.’

                                  neg
(8) (kenny) participate research not-yet                         [HKSL]
    ‘Kenny has not yet participated in the research.’
Some items belonging in this category can also have an emphatic nuance, as never-past or never-future in Israeli Sign Language (Israeli SL, Meir 2004, 110). Among the set of irregular negative signs, two different types can be distinguished from the point of view of morphology: on the one hand, transparent forms where the negative has been concatenated or cliticized onto a lexical sign, or else a negative morpheme (simultaneous or sequential) has been added to the root (Zeshan 2004, 45⫺49); on the other hand, suppletive negatives, that is, totally opaque negative counterparts of existing non-negated signs. An example of the latter group has been illustrated in Figure 15.2 (above) for LSC. In the case of negative cliticization, a negative sign existing independently is concatenated with another sign, but the resulting form remains recognizable: both signs retain their underlying movement, albeit more compressed, and no handshape assimilation occurs. The interpretation of both signs together is fully compositional. An illustration of such a case is shown in Figure 15.3 for TİD, where the cliticized form of not can be compared to the non-cliticized one (Zeshan 2004, 46).
Fig. 15.3: TİD cliticized negation know^not (a) vs. non-cliticized not (b). Copyright © 2004 by Ulrike Zeshan. Reprinted with permission.
Fig. 15.4: Irregular simultaneous affixal negation with the verb need in FinSL: need (a) vs. need-not (b). Copyright © 2004 by Ulrike Zeshan. Reprinted with permission.
The other process found in the formation of irregular negatives is affixation, which can be simultaneous or sequential. An illustrative case of simultaneous negative affixation found in FinSL (Zeshan 2004, 47⫺49; Savolainen 2006, 299⫺301) is illustrated in Figure 15.4: the affix consists of a change in palm orientation that, depending on the root it combines with, can result in a negative with a horizontally upwards or vertically inwards oriented open handshape. The simultaneous morpheme does not have an independent movement, and in some cases it assimilates its handshape to that of the root (e.g., see-not). In addition, the resulting negative sign can have a more specific or idiosyncratic meaning (e.g., perfective/resultative in see-not with the interpretation ‘have not seen, did not see’; hear-not meaning ‘have not heard, do not know’). The negative morpheme has no free-occurring counterpart and combines with a restricted set of lexical items, thus displaying limited productivity. Occasionally, affixation involves a specific handshape, such as the extended pinkie handshape in HKSL (see section 5 for further discussion). This handshape is derived from a sign meaning bad/wrong and is affixed to certain items, giving rise to negative signs such as know-bad (‘don’t know’) or understand-bad (‘don’t understand’), as illustrated in Figure 15.5 (Tang 2006, 223).
Fig. 15.5: Irregular simultaneous affixal negation by means of handshape with the verb know in HKSL (know vs. know-bad). Copyright © 2006 by Ishara Press. Reprinted with permission.
Sequential affixation has been shown most clearly to be at stake in the ASL suffix ^zero, which is formationally related to the sign nothing. Aronoff et al. (2005, 328⫺330) point out that ^zero shows the selectivity and behavior typical of morphological affixes: it only combines with one-handed plain verbs; the path movements get compressed or coalesce; the non-manuals span the two constituent parts of the sign; no handshape assimilation occurs; and some of the negative forms yield particular meanings. These phenomena clearly distinguish this process from compound formation. A similar derivational process is observed in Israeli SL, where the relevant suffix ^not-exist can give rise to negative predicates with idiosyncratic
meanings such as surprise+not-exist (‘doesn’t interest me at all’) or enthusiasm+not-exist (‘doesn’t care about it’) (Meir 2004, 116; for more details, see section 5 below). When an irregular negative is available in the language, it normally blocks the option of combining the non-negative predicate with an independent manual negator or with a non-manual marker, if this non-manual can convey sentential negation on its own. Compare the ungrammatical LSC example (9) with Figure 15.2 above, which shows the suppletive form cannot:

      (        ) hs
(9) * can (not)                                                  [LSC]
Nevertheless, this is not always the case and sometimes both options co-exist, as reported for LIS in Geraci (2005).
2.1.3. Negation in the nominal and adverbial domain

Apart from negative marking related to the predicate, negation is often encoded in the nominal domain and in adverbials as well. Negative determiners glossed as no and negative quantifiers (pronouns, in some descriptions) such as none, nothing, or no one occur in many of the sign languages for which a description of the negation system exists. The LIS example in (10) illustrates the use of a nominal negative (Geraci 2005):

                   hs
(10) contract sign nobody                                        [LIS]
     ‘Nobody signed the contract.’

Two distinct negative determiners have been identified for ASL: nothing and noº, illustrated in (11) (Wood 1999, 40).

(11) john break fan nothing/noº                                  [ASL]
     ‘John did not break any (part of the) fan.’
Negative adverbials such as never are also very common. For ASL, Wood (1999) has argued that different interpretations result from different syntactic positions of never: when preverbal, it negates the perfect (12a), while in postverbal position, it yields a negative modal reading (12b).

(12) a. bob never eat fish                                       [ASL]
        ‘Bob has never eaten fish.’
     b. bob eat fish never
        ‘Bob won’t eat fish.’
Fig. 15.6: LSC negative imperative sign don’t!
2.1.4. Other pragmatically specialized negators

Beyond the strict domain of sentential negation, other occurrences of negatives must be mentioned. Negative imperatives (or prohibitives) are usually expressed non-manually in combination with a more general negative particle, but some languages have a specialized sign for this type of speech act, as in the LSC negative imperative shown in Figure 15.6 (Quer/Boldú 2006). In the domain of pragmatically specialized negations, a whole range of signs is found, including negative responses, refusal of an offer, or denial. Some of these signs constitute one-word utterances and have been classified as interjections. Metalinguistic negation can be included in this set of special uses. Although it is not common, Japanese Sign Language (NS) has a specialized manual negation to refute a specific aspect of a previous utterance: differ (Morgan 2006, 114).
2.2. Non-manual markers of negation

It has already been pointed out at the beginning of this section that negation is not only realized at the manual level, but also at the non-manual one, and that languages vary as to how these two types of markers combine and to what extent they are able to convey sentential negation independently of each other (for non-manual markers, cf. also chapter 4, Prosody). It seems clear that such markers have their origin in gestures and facial expressions that occur in association with negative meanings in human interaction. In sign languages, however, these markers have evolved into fully grammaticalized elements constrained by language-specific grammatical rules (see Pfau/Steinbach 2006 and chapter 34). Beyond the actual restrictions to be discussed below, especially in section 3, there is psycholinguistic and neurolinguistic evidence indicating that non-manuals typically show linguistic patterns in acquisition and processing and can be clearly distinguished from affective communicative behavior (Reilly/Anderson 2002; Corina/Bellugi/Reilly 1999; Atkinson et al. 2004). Moreover, unlike gestures, linguistic non-manuals in production have a discrete onset and offset, are constant, and have a clear and linguistically defined scope (Baker-Shenk 1983).
2.2.1. Head movements

The main non-manual markers of negation involve some sort of head movement. The most pervasive one is headshake, a side-to-side movement of the head which is found in virtually all sign languages studied to date (Zeshan 2004, 11). The headshake normally associates with the negative sign, if present, but it commonly spreads over other constituents in the clause. The spreading of negative headshake is determined by language-specific grammar constraints, as will be discussed in section 3. In principle, it must be co-articulated with manual material, but some cases of free-standing headshake have been described, for instance, for Chinese Sign Language (CSL). Example (13) illustrates that co-articulation of the negative headshake with the manual sign leads to ungrammaticality, the only option being articulation after the lexical sign (Yang/Fischer 2002, 176).

     (* hs)     hs
(13) understand                                                  [CSL]
     ‘I don’t understand.’
Although other examples involving free-standing negative headshakes have been documented, they can be reduced to instances of negative answers to (real or rhetorical) questions, as shown in (14a) for NZSL (McKee 2006, 84), or to structures with contrastive topics, where the predicate is elided, as in (14b) from CSL (Yang/Fischer 2002, 178).

        rhet-q               hs
(14) a. worth go conference                                      [NZSL]
        ‘Is it worth going to the conference? I don’t think so.’
        t                 hs
     b. hearing teachers                                         [CSL]
        ‘(but some) hearing teachers do not [take care of deaf students].’

A different use of a free-standing negative headshake is the one described for Flemish Sign Language (VGT) (Van Herreweghe/Vermeerbergen 2006, 241), where it functions as a tag question after an affirmative sentence, as shown in (15).

                                 hs+yn
(15) can also saturday morning /                                 [VGT]
     ‘It is also possible on Saturday morning, isn’t it?’
Headturn, a non-manual negative marker that is much less widespread than headshake, could be interpreted as a reduced form of the latter. It has been described for British Sign Language (BSL), CSL, Greek Sign Language (GSL), Irish Sign Language (Irish SL), LIU, Quebec Sign Language (LSQ), Russian Sign Language (Zeshan 2006b, 11), and VGT. A third type of non-manual negative marker that has been reported because of its singularity is head-tilt, which is attested in some sign languages of the Eastern Mediterranean such as GSL, Lebanese Sign Language (LIL), LIU, and TİD. Just like the headshake, this non-manual is rooted in the negative gesture used in the surrounding hearing societies, but as part of the relevant sign language grammars, it obeys the particular constraints of each one of them. Although it tends to co-occur with a single negative sign, it can sometimes spread further, even over the whole clause, in which case it yields an emphatic reading of negation in GSL (cf. (16)). It can also appear on its own in GSL (unlike in LIL or LIU) (Antzakas 2006, 265).
     ht
(16) index1 again go want-not                                    [GSL]
     ‘I don’t want to go (there) again.’
It is worth noting that when two manual negatives co-occur in the same sentence and are inherently associated with the same non-manual marker, the latter tends to spread between the two. This behavior reflects a more general phenomenon described for ASL as perseveration of articulation of several non-manuals (Neidle et al. 2000, 45⫺48): both at the manual and non-manual levels, “if the same articulatory configuration will be used multiple times, it tends to remain in place between those articulations (if this is possible)”. Spreading of a negative non-manual is common in sign languages where two negative signs can co-occur in the same clause, as described for TİD (Zeshan 2006c, 158 f.). If both manual negators are specified for the same non-manual, it spreads over the intervening sign (17a); if the non-manuals are different, they either remain distinct (17b) or one takes over and spreads over the whole domain, as in (17c).

                 hs
(17) a. none(2) appear no-no                                     [TİD]
        hs       ht
     b. none(2) go^not
             hs
     c. none(2) go^not

Non-manual markers are recruited in sign language grammars for a wide range of purposes in the lexicon and in the different grammatical subcomponents (for an overview, see Pfau/Quer 2010). Given the types of distribution restrictions reported here and in the next section, the negative non-manuals appear to perform clear grammatical functions and cannot just be seen as intonational contours typical of negative sentences. Building on Pfau (2002), it has been proposed that in some sign languages (e.g., German Sign Language (DGS) and LSC), the negative headshake should be analyzed as a featural affix that modifies the prosodic properties of a base form, in a parallel fashion to tonal prosodies in tonal languages (Pfau/Quer 2007, 133; also cf. Pfau 2008). As a consequence of this characterization, its spreading patterns follow naturally and mirror the basic behavior of tone spreading in some spoken languages.
It is worth mentioning that another non-manual, headnod, is reported to systematically mark affirmation ⫺ be it in affirmative responses to questions or for emphasis. The LIS example in (18) illustrates the latter use of headnod. Geraci (2005) interprets it as the positive counterpart of negative headshake, both being the manifestation of the same syntactic projection encoding clausal polarity, in line with Laka (1990).

       hn
(18) arrive someone                                              [LIS]
     ‘Someone did arrive.’
2.2.2. Facial expression

Beyond head movements, other non-manuals are associated with the expression of negation. Among the lexically specified non-manuals, the ones that are more widespread crosslinguistically include frowning, squinted eyes, nose wrinkling, and lips spread, pursed or with the corners down. Other markers are more language-specific or even sign-specific, such as puffed cheeks, air puff, tongue protruding, and other mouth gestures. The more interesting cases are probably those in which negative facial non-manuals clearly have the grammatical function of negating the clause. Brazilian Sign Language (LSB) features both headshake and negative facial expression (lowered corners of the mouth or O-like mouth gesture), which can co-occur in negative sentences. However, it is negative facial expression (nfe) and not headshake that functions as the obligatory grammatical marker of negation, as the following contrast illustrates (Arrotéia 2005, 63).

         nfe
(19) a. ix1 1seea joãoa ix1 (not)                                [LSB]
        ‘I didn’t see João.’
          hs
     b. * ix1 1seea joãoa ix1 (not)
Other facial non-manuals have also been described as sole markers of sentential negation for Israeli SL (mouthing lo ‘no, not’: Meir 2004, 111 f.), for LIU (negative facial expression: Hendriks 2007, 118 f.), and for TİD (puffed cheeks: Zeshan 2003, 58 f.).
3. Syntactic patterns of negation

It has been observed across sign languages that negative signs show a tendency to occur sentence-finally, although this is by no means an absolute surface property. Unsurprisingly, negation, as a functional category, interacts with other functional elements and with lexical items as well (see section 2.1) and it lexicalizes as either a syntactic head or a phrase. Moreover, both the manual and non-manual components of negation must be taken into account in the analysis of negative clauses. As expected, the range of actual variation in the syntactic realization of negation is greater than a superficial examination might reveal. In this section, we will look at the syntactic encoding of sentential negation, concentrating on some aspects of structural variation that have been documented and accounted for within the generative tradition. It should be mentioned, however, that the structural analyses of sign language negation are still limited and that many of the existing descriptions of negative systems do not offer the amount of detail required for a proper syntactic characterization.
3.1. Interaction of negation with other syntactic categories

An interesting syntactic fact documented for Israeli SL is that different negators are selective as to the syntactic categories they can combine with (Meir 2004, 114 f.). As illustrated in (20), not, neg-exist(1), and neg-past can only co-occur with an adjective, a noun, or a verb, respectively.

(20) a. chair indexa comfortable not/*neg-past/*neg-exist(1/2)   [Israeli SL]
        ‘The chair is/was not comfortable.’
     b. index1 computer neg-exist(1/2)/*neg-past/*not
        ‘I don’t have a computer.’
     c. index3 sleep neg-past/*neg-exist(1/2)
        ‘He didn’t sleep at all.’
This is an important observation that deserves further exploration, also in other languages for which it has been noted that certain negators exhibit similar combinatorial restrictions. A clear case of an impact of negation on clausal structure has been documented for LSB. De Quadros (1999) describes and analyzes the distributional patterns of sentential negation with both agreeing and plain verbs and shows that only the former allow for preverbal negation (21a), while the latter bar preverbal negation and induce clause-final negation, as the examples in (21b) and (21c) show. De Quadros accounts for this difference in distribution within the framework of Generative Grammar. In particular, she derives this basic fact from the assumption that preverbal negation blocks the movement required for the lexical verb to pick up abstract agreement features in a higher syntactic position, thus resulting in clause-final negation. Agreeing verbs, by virtue of carrying inflection overtly, do not need to undergo this type of movement and allow for a preverbal negator.

                   neg
(21) a. ix johna no agiveb book                                  [LSB]
        ‘John does not give the book to her/him.’
                   neg
     b. * ix johna no desire car
        (‘John does not like the car.’)
                            neg
     c. ix johna desire car no
        ‘John does not like the car.’
Interestingly, although both LSB and ASL are SVO languages, ASL does allow for the pattern excluded in LSB (21b), as is illustrated in (22). Such fine-grained crosslinguistic comparisons make it clear that surface properties require detailed analyses for each language, given that other factors in the particular grammars at hand are likely to play a role and lead to diverging patterns.

          neg
(22) john not eat meat                                           [ASL]
     ‘John does not eat meat.’
3.2. Doubling

Another interesting fact concerning the syntactic realization of negation has been noted for several languages (ASL, Petronio 1993; CSL, Yang/Fischer 2002; LSB, de Quadros 1999; NZSL, McKee 2006): negative markers are doubled in structures in which an emphatic interpretation ⫺ in some cases identified as focus ⫺ is at play. In this sense, negation resembles other categories that enter the same doubling pattern (modals, wh-words, quantifiers, lexical verbs, or adverbials). An example from CSL is displayed in (23), taken from Yang and Fischer (2002, 180).

     nfe                           nfe
(23) none/nothing master big-shape none/nothing                  [CSL]
     ‘There is nothing to show that you master the whole shape first.’
There is no unified account of such doubling structures, which feature several other categories beyond negation. At least for ASL and LSB, however, analyses of double negatives have been proposed that interpret the clause-final instance of negation as a copy of the sentence-internal one occupying a functional head high up in the clausal structure. For ASL, it has been proposed that this position is the Cº head occurring on the right branch and endowed with a [+focus] feature (Petronio 1993). For LSB, de Quadros (1999) argues that the clause-final double is in fact base-generated in the head of a Focus Phrase under CP; moving everything below Focusº to the specifier of FocusP results in the attested linear order. For both analyses, it is crucial that doubling structures always feature heads and never phrases (for an opposing view on this leading to a different analysis in ASL, see Neidle et al. 2000; cf. the discussion about the proper characterization of wh-movement in ASL in chapter 14, on which the analysis of doubling structures also hinges). Interestingly, the categories that are susceptible to undergoing doubling can merge together, showing the same behavior as a single head, as exemplified for the modal can and negation in (24) from ASL (Petronio 1993, 134).
         neg
(24) ann can’t read can’t                                        [ASL]
     ‘Ann CAN’T read.’
Only a single double can appear per clause, be it matrix or embedded. This restriction follows naturally from an interpretation as emphatic focus, which generally displays such a constraint. This line of analysis builds on doubling data that do not feature a pause before the sentence-final double. Petronio and Lillo-Martin (1997) distinguish these cases from other possible structures in which the repeated constituent at the end of the clause is preceded by a pause. As Neidle et al. (2000) propose, the latter cases are amenable to an analysis as tags in ASL.
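The two analyses of doubling summarized above can be rendered as rough labeled bracketings (a schematic approximation of the prose descriptions given here; the exact labels and geometry of the original trees may differ). For ASL, the double occupies a right-branching Cº carrying [+focus]; for LSB, it is base-generated in Focusº and the remainder of the clause moves to the specifier of FocusP:

    ASL (Petronio 1993):    [CP [TP ann can’t read ] [Cº can’t[+focus] ]]
    LSB (de Quadros 1999):  [FocusP [TP … ]i [Focus′ [Focusº double ] ti ]]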
3.3. Spreading

A general property of double negative structures is that the non-manual feature can spread between the two negative elements. Therefore, next to cases like (23) above, CSL also provides structures where spreading is at play, such as (25) (Yang/Fischer 2002, 181).

                nfe
(25) start time not-need grab details not-need                   [CSL]
     ‘Don’t pay attention to a detail at the beginning.’
This is an instance of what Neidle et al. (2000, 45) have dubbed perseveration of a non-manual articulation (see section 2.2). In this case, perseveration of the non-manual takes place between two identical manual signs, a situation different from the one described in (17) above. Spreading patterns of non-manual negation are subject to restrictions. It is clear, for instance, that if a topic or an adjunct clause is present sentence-initially, the negative marker cannot spread over it and supersede other non-manuals associated with that constituent, as noted, for instance, in Liddell (1980, 81) for the ASL example in (26).

      t        neg
(26) dog chase cat                                               [ASL]
     ‘As for the dog, it didn’t chase the cat.’
It has been argued that spreading of negative non-manuals is clearly restricted by syntactic factors. Specifically, in ASL, headshake can co-occur with the manual negator not only or optionally spread over the manual material within the verb phrase (VP) that linearly follows the negator, as exemplified in (27a). However, if the manual negation is absent, spreading is obligatory, as the contrast in (27b⫺c) shows. Neidle et al. (2000) interpret this paradigm as evidence that headshake is the overt realization of a syntactic feature [+neg] residing in Negº, the head of NegP, which needs to associate with manual material. Generally, non-manuals must spread whenever there is no lexical material occupying the relevant functional head, the spreading domain being the c-command domain of that head. In the case of the headshake, the relevant head is Negº and the c-command domain is the VP.

             neg(          )
(27) a. john not buy house                                       [ASL]
             neg
     b. john buy house
        ‘John didn’t buy the house.’
               neg
     c. * john buy house
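The paradigm in (27) can be represented schematically as follows (a rough bracketing based on the analysis just summarized, not Neidle et al.’s original notation): the feature [+neg] in Negº either attaches to the manual negator not or, in its absence, must spread over the VP, its c-command domain.

    (27a)  [NegP [Negº not+[+neg] ] [VP buy house ]]   ⫺ hs on not, optional spread over VP
    (27b)  [NegP [Negº ∅+[+neg] ] [VP buy house ]]     ⫺ hs obligatorily spreads over the VP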
c. * john buy house Another piece of evidence in favor of the syntactic nature of negative non-manual spreading is offered in Pfau (2002, 287) for DGS, where it is shown that, unlike ASL, spreading of the headshake over the VP material is optional, even if the manual negator is absent from the structure. However, the spreading must target whole constituents (28a) and is barred otherwise, as in (28b), where it is articulated only on the adjectival sign of the object NP. neg
(28)
a.
man flower buy ‘The man is not buying a flower.’
[DGS]
neg
b. * man flower red buy In some sign languages at least, non-manual spreading can have interpretive effects, which strictly speaking renders it non-optional. This is the case in LSC, where spreading over the object NP results in a contrastive corrective reading of negation (Quer 2007, 44). hs
(29)
hn
santi vegetables eat, fruit ‘Santi doesn’t eat vegetables, but fruit (he does).’
[LSC]
Spreading of the headshake over the whole sentence gives rise to an interpretation as a denial of a previous utterance, as in (30).

     hs
(30) santi summer u.s. go                                        [LSC]
     ‘It is not true/It is not the case that Santi is going to the U.S. in the summer.’
A parameter of variation between DGS and LSC has been detected in the expression of negation: while in LSC, the non-manual marker can co-appear with the manual negator only (31), in DGS, it must extend at least over the predicate as well, as is evident from the sentence pair in (32).

                    hs
(31) santi meat eat not                                          [LSC]
     ‘Santi does not eat meat.’
                      neg
(32) a. mother flower buy not                                    [DGS]
        ‘Mother is not buying a flower.’
                            neg
     b. * mother flower buy not

Pfau and Quer (2002, 2007) interpret this asymmetry as a reflection of the well-known fact that negative markers can have head or phrasal status syntactically (for an overview, see Zanuttini 2001). They further assume that headshake is the realization of a featural affix. This affix must be co-articulated with manual material, on which it imposes a prosodic contour consisting in headshake. In LSC, the manual marker not is a syntactic head residing in Negº and [+neg] naturally affixes to it, giving rise to structures such as (31). If the structure does not feature not, then the predicate will have to raise to Negº, where [+neg] will combine with it and trigger headshake on the predicate sign, as in (1b). The essentials of both types of derivations are depicted in Figure 15.7.
Fig. 15.7: LSC negative structures, with and without negative marker not.
In contrast, DGS not is a phrasal category that occupies the (right-branching) Specifier of NegP (the headshake it carries is lexically marked). Since [+neg] needs to combine with manual material, it always attracts the predicate to Negº (cf. Figure 15.8), thus explaining the ungrammaticality of (32b) with headshake only on not. This type of analysis is able to account for the negation patterns found in other languages too, if further specific properties of the language are taken into account. This is the case for ASL, as shown in Pfau and Quer (2002). It also provides a natural explanation for the availability of (manual) negative concord in LSC and its absence in DGS, as will be discussed in the next section. The idiosyncratic behavior of negative modals (and semi-modals) also follows naturally from this line of analysis: being generated in the Tense head, they are always forced to raise to Negº in order to support the [+neg] affix and surface as forms with a cliticized negation or as suppletive negative counterparts of the positive verb (cf. section 2.1 above).

Fig. 15.8: DGS negative structure, with negative marker not and obligatory V-movement to Neg.
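As the tree diagrams of Figures 15.7 and 15.8 are not reproduced here, the following bracketings sketch the essentials of the derivations just described (a schematic approximation based on the prose; the published trees may differ in detail):

    LSC, with not (cf. (31)):     [NegP [VP santi meat eat ] [Negº not+[+neg] ]]
    LSC, without not (cf. (1b)):  [NegP [VP santi meat tV ] [Negº eatV+[+neg] ]]
    DGS (cf. (32a)):              [NegP [Neg′ [VP mother flower tV ] [Negº buyV+[+neg] ]] [Spec not ]]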
4. Negative concord

It is a well-known fact that in many languages two or more negative items can appear in the same clause without changing its polarity, which remains negative. This phenomenon is known as split negation or negative concord (for a recent overview, see Giannakidou 2006). In non-manual dominant sign languages (i.e. those languages where a clause can be negated by non-manual negation only), the non-manual negative marker (the [+neg] affix in Pfau/Quer’s (2002) proposal) must be taken to be the main negator. Since in this type of language, manual sentential negation often co-occurs with non-manual negation yielding a single negation reading, it must be concluded that negative concord is at play between the manual and non-manual component, as in the following LSC example (Quer 2002/2007, 45).

               hs
(33) ix1 smoke no                                                [LSC]
     ‘I do not smoke.’
In addition, a second type of negative concord has been attested at the manual level, namely when two or more negative manual signs co-appear in a clause but do not contribute independent negations to the interpretation, as exemplified in the LSC example in (34). Crucially, the interpretation of this sentence is not ‘Your friend never doesn’t come (i.e. he always comes)’.

                        hs  hs
(34) friend ix2 come no never                                    [LSC]
     ‘Your friend never comes.’
With a few exceptions (Arrotéia 2005 on LSB; Hendriks 2007 on LIU; Pfau/Quer 2002, 2007 and Quer 2002/2007 on LSC; Wood 1999 on ASL), the phenomenon of negative concord has not received much attention in descriptions of sign language negation systems. However, scattered cases of negative concord examples are reported for languages such as BSL (Sutton-Spence/Woll 1999, 77), CSL (Yang/Fischer 2002, 181), TİD (Zeshan 2006c, 157), and VGT (Van Herreweghe/Vermeerbergen 2006, 248). Some of the examples are characterized as encoding emphatic or strong negation. See (35) for a CSL example in which a lexically negative verb co-occurs with sentential negation. Again, the combination of two negative signs does not yield a positive reading.

           nfe
(35) index dislike see no                                        [CSL]
     ‘I don’t like to watch it.’
In the LIU example in (36), cliticized negation is duplicated in the same clause by the basic clause negator neg (Hendriks 2007, 124).

     y/n                    hs
(36) maths, like^neg index1 neg                                  [LIU]
     ‘I don’t like maths.’
As is the case in the better-known instances of spoken languages, negative concord is not a uniform phenomenon but rather shows parameters of variation. Comparable variation has also been documented across structurally very similar languages like LSC and DGS (Pfau/Quer 2002, 2007). While both display negative concord between manual and non-manual components of negation, only the former features negative concord among manual signs. This follows partly from the fact that the basic manual clause negator in LSC is a head category sitting in Negº, whereas in DGS, it is a phrase occupying the Specifier of NegP, as depicted in Figures 15.7 and 15.8 above. It might therefore be argued that the difference follows from the availability of a Specifier position for further phrasal negative signs in LSC, which is standardly occupied in DGS. Nevertheless, this cannot be the whole explanation, because LSC allows for two phrasal negatives under the same negative concord reading (cf. (37)), a situation that could be expected in DGS, contrary to fact.
         hs    hs   hs
(37) ix1 smoke neg2 never                                        [LSC]
     ‘I never ever smoke.’
The difference must be attributed to the inherent properties of negative signs, which may or may not give rise to concord readings depending on the language. DGS can thus be characterized as a non-negative concord language at the manual level, despite having split negation (i.e., the non-manual negative affix and the manual sentential negator not jointly yield a single sentential negation reading). The presence of further negative signs leads to marked negation readings or simply to ungrammaticality. LIS is another language that does not display negative concord structures (Geraci 2005).
5. Lexical negation and morphological idiosyncrasies of negatives

In section 2.1, some processes of negative affixation were mentioned that yield complex signs conveying sentential negation. A number of those processes have also been shown to result in the formation of nouns and adjectives with negative meaning, normally the antonym of a positive lexical counterpart, with the important difference that in these items, the negation does not have sentential scope. As is common in processes of lexical formation, however, the output can have an idiosyncratic meaning that does not correspond transparently to its antonym. In accordance with the lack of sentential scope, no negative non-manuals co-occur, and if they do (as a consequence of lexical marking), they never spread. Occasionally, negative affixes are grammaticalized from negative predicates. For Israeli SL, for instance, Meir (2004, 15) argues that not-exist is a suffix which originates from the negative existential predicate not-exist(1). Independent of the category of the root, the suffix invariably gives an adjective as a result, as can be observed in (38).

(38) a. interesting+not-exist   ‘uninteresting’                  [Israeli SL]
     b. shame+not-exist         ‘shameless’
     c. strength+not-exist      ‘exhausted’
A negative handshape characterized by pinkie extension has been shown to be operative in some East Asian sign languages in the formation of positive-negative pairs of lexical items. In HKSL, the negative handshape, which as a stand-alone sign means bad/wrong, can replace the handshape of the sign it is affixed to, thus functioning as a simultaneous affix, or else be added sequentially after the lexical sign, as illustrated for the sign taste/mouth in Figure 15.9 (Zeshan 2004, 45). In this case, the resulting form mouth^bad (‘dumb’) has a non-transparent meaning that is not compositionally derived from its parts. Next to transparent derivations such as reasonable/unreasonable, appealing/unappealing, and lucky/unlucky, some opaque ones are also attested, such as mouth^bad ‘dumb’ (see Figure 15.9), ear^bad ‘deaf’, and eye^bad ‘blind’ (Tang 2006, 223).
Fig. 15.9: Example of sequential negative handshape in deriving an adjective (mouth^bad) in HKSL. Copyright © 2004 by Ulrike Zeshan. Reprinted with permission.
In a number of CSL signs, such as those in (39), the positive-negative pattern is marked by an actual opposition of handshapes: the positive member of a pair has the thumb-up handshape, the negative one an extended pinkie (Yang/Fischer 2002, 187).

(39) a. correct/right vs. wrong                                  [CSL]
     b. neat vs. dirty
     c. skillful vs. unskillful
     d. fortunate vs. unfortunate
Some formational features that occur in sentential negatives can appear in irregular lexically negative signs as well, such as the diagonal inward-outward movement that occurs in DGS modals but also in the sign not^valid, or a change in hand orientation in FinSL (for an overview, see Zeshan 2004, 41 ff.). Sometimes the contrasts are not really productive, like the orientation change in the pair legal/illegal in LIU (Hendriks 2007, 114). Beyond lexically marked negation, it is worth mentioning that for some sign languages, certain peculiar features have been noted in the morphology associated with negation. One of these features is person inflection of the sign nothing in NZSL (McKee 2006, 85). The sign, which is standardly used to negate predicates, can be articulated at person loci and is interpreted in context. For instance, when inflected for second person, it will be interpreted as ‘You don’t have/You aren’t/Not you.’ It can also show multiple inflection through a lateral arc displacement, much in the same way as in plural verb agreement (see chapter 7). Although it has not been interpreted as such in the original source, this might be a case of verb ellipsis where the negative sign acquires the properties of a negative auxiliary. Another interesting morphological idiosyncrasy is reported for negated existentials in NS (Morgan 2006, 123): the language has lexicalized animacy in the domain of existential verbs and possesses a specific item restricted to the expression of existence with animate arguments, exist-animate, as opposed to an unrestricted exist-unmarked. While the former can co-occur with bimanual not, exist-unmarked cannot, and negation of existence is conveyed by not, nothing, zero, or from-scratch.
6. Concluding remarks

This overview of the grammatical and lexical encoding of negation and negative structures across sign languages has documented the linguistic variation existing in this domain, despite the still limited range of descriptions and analyses available. Even what might be considered a modality-dependent feature, namely the non-manual encoding of negation, turns out not to function uniformly in the expression of negation across the sign languages studied. Rather, its properties and distribution are constrained by the language-particular grammars they are part of. At the same time, however, it is also striking to notice how recurrent and widespread some morphological features are in the negation systems described. These recurrent patterns offer a unique window into grammaticalization pathways of relatively young languages in the visual-gestural modality. In any case, the scholarship reported here should have made it clear that much more detailed work on a broader range of sign languages is needed to gain better insights into the many issues that have already been raised for linguistic theory and description.
7. Literature

Antzakas, Klimis 2006 The Use of Negative Head Movements in Greek Sign Language. In: Zeshan, Ulrike (ed.), Interrogative and Negative Constructions in Sign Languages. Nijmegen: Ishara Press, 258⫺269.
Aronoff, Mark/Meir, Irit/Sandler, Wendy 2005 The Paradox of Sign Language Morphology. In: Language 81, 301⫺344.
Arrotéia, Jéssica 2005 O Papel da Marcação Não Manual nas Sentenças Negativas em Língua de Sinais Brasileira (LSB). MA Thesis, Universidade Estadual de Campinas.
Atkinson, Joan/Campbell, Ruth/Marshall, Jane/Thacker, Alice/Woll, Bencie 2004 Understanding ‘Not’: Neuropsychological Dissociations Between Hand and Head Markers of Negation in BSL. In: Neuropsychologia 42, 214⫺229.
Baker-Shenk, Charlotte 1983 A Micro-analysis of the Nonmanual Components of Questions in American Sign Language. PhD Dissertation, University of California, Berkeley.
Corina, David P./Bellugi, Ursula/Reilly, Judy 1999 Neuropsychological Studies of Linguistic and Affective Facial Expressions in Deaf Signers. In: Language and Speech 42, 307⫺331.
Geraci, Carlo 2005 Negation in LIS (Italian Sign Language). In: Bateman, Leah/Ussery, Cheron (eds.), Proceedings of the North East Linguistic Society (NELS 35). Amherst, MA: GLSA, 217⫺230.
Giannakidou, Anastasia 2006 N-words and Negative Concord. In: Everaert, Martin/Riemsdijk, Henk van (eds.), The Blackwell Companion to Syntax, Volume III. Oxford: Blackwell, 327⫺391.
Hendriks, Bernadet 2007 Negation in Jordanian Sign Language: A Cross-linguistic Perspective. In: Perniss, Pamela/Pfau, Roland/Steinbach, Markus (eds.), Visible Variation. Comparative Studies on Sign Language Structure. Berlin: Mouton de Gruyter, 103⫺128.
Hendriks, Bernadet 2008 Jordanian Sign Language: Aspects of Grammar from a Cross-linguistic Perspective. PhD Dissertation, University of Amsterdam. Utrecht: LOT.
Laka, Itziar 1990 Negation in Syntax: The Nature of Functional Categories and Projections. PhD Dissertation, MIT, Cambridge, MA.
Meir, Irit 2004 Question and Negation in Israeli Sign Language. In: Sign Language & Linguistics 7(2), 97⫺124.
Neidle, Carol/Kegl, Judy/MacLaughlin, Dawn/Bahan, Ben/Lee, Robert G. 2000 The Syntax of American Sign Language. Functional Categories and Hierarchical Structure. Cambridge, MA: MIT Press.
Petronio, Karen 1993 Clause Structure in American Sign Language. PhD Dissertation, University of Washington.
Petronio, Karen/Lillo-Martin, Diane 1997 WH-movement and the Position of Spec-CP: Evidence from American Sign Language. In: Language 73(1), 18⫺57.
Pfau, Roland 2002 Applying Morphosyntactic and Phonological Readjustment Rules in Natural Language Negation. In: Meier, Richard P./Cormier, Kearsy/Quinto-Pozos, David (eds.), Modality and Structure in Signed and Spoken Languages. Cambridge: Cambridge University Press, 263⫺295.
Pfau, Roland 2008 The Grammar of Headshake: A Typological Perspective on German Sign Language Negation. In: Linguistics in Amsterdam 2008(1), 37⫺74. [http://www.linguisticsinamsterdam.nl/]
Pfau, Roland/Quer, Josep 2002 V-to-Neg Raising and Negative Concord in Three Sign Languages. In: Rivista di Grammatica Generativa 27, 73⫺86.
Pfau, Roland/Quer, Josep 2007 On the Syntax of Negation and Modals in German Sign Language (DGS) and Catalan Sign Language (LSC). In: Perniss, Pamela/Pfau, Roland/Steinbach, Markus (eds.), Visible Variation. Comparative Studies on Sign Language Structure. Berlin: Mouton de Gruyter, 129⫺161.
Pfau, Roland/Quer, Josep 2010 Nonmanuals: Their Grammatical and Prosodic Roles. In: Brentari, Diane (ed.), Sign Languages (Cambridge Language Surveys). Cambridge: Cambridge University Press, 381⫺402.
Pfau, Roland/Steinbach, Markus 2006 Modality-independent and Modality-specific Aspects of Grammaticalization in Sign Languages. In: Linguistics in Potsdam 24, 3⫺98. [http://www.ling.uni-potsdam.de/lip/]
Quadros, Ronice M. de 1999 Phrase Structure of Brazilian Sign Language. PhD Dissertation, Pontifícia Universidade Católica do Rio Grande do Sul, Porto Alegre.
Quer, Josep 2002/2007 Operadores Negativos en Lengua de Signos Catalana. In: Cvejanov, Sandra B. (ed.), Lenguas de Señas: Estudios de Lingüística Teórica y Aplicada. Neuquén: Editorial de la Universidad Nacional del Comahue, Argentina, 39⫺54.
Quer, Josep/Boldú, Rosa Ma. 2006 Lexical and Morphological Resources in the Expression of Sentential Negation in Catalan Sign Language (LSC). In: Actes del 7è Congrés de Lingüística General, Universitat de Barcelona. CD-ROM.
Reilly, Judy/Anderson, Diane 2002 FACES: The Acquisition of Non-Manual Morphology in ASL. In: Morgan, Gary/Woll, Bencie (eds.), Directions in Sign Language Acquisition. Amsterdam: Benjamins, 159⫺181.
Savolainen, Leena 2006 Interrogatives and Negatives in Finnish Sign Language: An Overview. In: Zeshan, Ulrike (ed.), Interrogative and Negative Constructions in Sign Languages. Nijmegen: Ishara Press, 284⫺302.
Sutton-Spence, Rachel/Woll, Bencie 1999 The Linguistics of British Sign Language. An Introduction. Cambridge: Cambridge University Press.
Tang, Gladys 2006 Questions and Negation in Hong Kong Sign Language. In: Zeshan, Ulrike (ed.), Interrogative and Negative Constructions in Sign Languages. Nijmegen: Ishara Press, 198⫺224.
Van Herreweghe, Mieke/Vermeerbergen, Myriam 2006 Interrogatives and Negatives in Flemish Sign Language. In: Zeshan, Ulrike (ed.), Interrogative and Negative Constructions in Sign Languages. Nijmegen: Ishara Press, 225⫺256.
Yang, Jun Hui/Fischer, Susan D. 2002 Expressing Negation in Chinese Sign Language. In: Sign Language & Linguistics 5(2), 167⫺202.
Zanuttini, Raffaella 2001 Sentential Negation. In: Baltin, Mark/Collins, Chris (eds.), The Handbook of Contemporary Syntactic Theory. Oxford: Blackwell, 511⫺535.
Zeshan, Ulrike 2003 Aspects of Türk İşaret Dili (Turkish Sign Language). In: Sign Language & Linguistics 6(1), 43⫺75.
Zeshan, Ulrike 2004 Hand, Head, and Face: Negative Constructions in Sign Languages. In: Linguistic Typology 8(1), 1⫺58.
Zeshan, Ulrike (ed.) 2006a Interrogative and Negative Constructions in Sign Languages. Nijmegen: Ishara Press.
Zeshan, Ulrike 2006b Negative and Interrogative Constructions in Sign Languages: A Case Study in Sign Language Typology. In: Zeshan, Ulrike (ed.), Interrogative and Negative Constructions in Sign Languages. Nijmegen: Ishara Press, 28⫺68.
Zeshan, Ulrike 2006c Negative and Interrogative Structures in Turkish Sign Language (TİD). In: Zeshan, Ulrike (ed.), Interrogative and Negative Constructions in Sign Languages. Nijmegen: Ishara Press, 128⫺164.
Josep Quer, Barcelona (Spain)
16. Coordination and subordination

1. Introduction
2. Coordination
3. Subordination
4. Conclusion
5. Literature
Abstract

Identifying coordination and subordination in sign languages is not easy because morphosyntactic devices which mark clause boundaries, such as conjunctions or complementizers, are generally not obligatory. Sometimes, however, non-manuals and certain syntactic diagnostics may offer a solution. Constituent boundaries can be delineated through eye blinks, and syntactic domains involved in coordination can be identified through head nods and body turns. In addition to these modality-specific properties delineating coordination and subordination, diagnostics of grammatical dependency, defined in terms of constraints on syntactic operations, are often useful. We observe that the island constraints involved in wh-extraction from coordination and subordination also hold in some sign languages, and that the scope of the negator and of the Q-morpheme imposes syntactic constraints on these constructions. Lastly, cross-linguistic variation is observed in some sign languages, as revealed, for instance, by gapping in coordinate structures, subject pronoun copy in sentential complements, and the choice of relativization strategy.
1. Introduction

In all natural languages, clauses can be combined to form complex sentences. Clause combining may generally involve like categories, a characteristic of coordination, or unlike categories, as in subordination. In his typological study of spoken languages, Lehmann (1988) defines coordination and subordination in terms of grammatical dependency. According to him, dependency is observed with subordination only, and coordination is analyzed as involving only sister relations between the conjuncts. More recently, syntactic analyses within the generative framework assume that natural languages realize a hierarchical syntactic structure, with grammatical dependencies expressed at different levels of the grammar. However, spoken language research has demonstrated that this quest for evidence for dependency is not so straightforward. As Haspelmath (2004) puts it, it is sometimes difficult to distinguish coordination from subordination, as mismatches may occur where two clausal constituents are semantically coordinated but syntactically subordinated to one another, or vice versa. It is equally difficult, if not more so, in the case of sign languages, which are relatively ‘younger’ languages: they lack a written form, a medium that encourages the evolution of conjunctions and complementizers as morphosyntactic devices for clause combination (Mithun 1988). In this chapter, we assume that bi-clausal constructions as involved in
coordination and subordination show dependency relations between constituents X and Y. Such dependency manifests itself in abstract grammatical operations such as extraction and gapping. We will provide an overview of current research on coordinate and subordinate structures in sign languages and examine whether these grammatical operations are also operative there. At this juncture, a crucial question to ask is what marks clause boundaries in sign languages, or, more precisely, what linguistic or prosodic cues there are to signal coordination and subordination. Morphosyntactic devices like case marking, complementizers, conjunctions, or word order are common cues for identifying coordinate and subordinate structures in spoken languages. On the sign language front, however, there is no standardized methodology for identifying clause boundaries, as pointed out in Johnston/Schembri (2007). We shall see that it is not obligatory for sign languages to incorporate conjunctions or complementizers. Before we go into the analysis, we will briefly discuss some recent attempts to delineate clause boundaries in sign language research. Research on spoken language prosody attempts to study the interface properties of phonology and syntax based on prosodic cues like tone variation or pauses that mark clause boundaries. Although results show that there is no isomorphic relationship between prosodic and syntactic constituents, structures are generally associated with Intonational Phrases (IP) in the prosodic domain. Emonds (1976) claimed that the boundary of a root sentence delimits an IP. Nespor and Vogel (2007) and Selkirk (2005), however, found that certain non-root clauses also form IP domains; these are parentheticals, non-restrictive relative clauses, vocatives, certain moved elements, and tags. In sign language research, there is a growing interest in examining the roles of non-manuals in sign languages. Pfau and Quer (2010) categorize them into (i) phonological, (ii) morphological, (iii) syntactic, and (iv) pragmatic. In this chapter, we will examine some of these functions of non-manuals. Crucial to the current analysis is the identification of non-manuals that mark clause boundaries within which we can examine grammatical dependency in coordination and subordination. Recently, non-manuals like eye blinks have been identified as prosodic cues for clause boundaries (Wilbur 1994; Herrmann 2010). Sze (2008) and subsequently Tang et al. (2010) found that while eye blinks generally mark intonational phrases in many sign languages, Hong Kong Sign Language (HKSL) uses them to mark phonological phrases as well. Sandler (1999) also observed that sentence-final boundaries are further marked by an across-the-board change of facial expression, head position, eye gaze direction, or eye blinks. These studies on prosodic cues lay the foundation for our analysis of coordination (section 2) and subordination (section 3) in this chapter.
2. Coordination

2.1. Types of coordination

Coordination generally involves the combining of at least two constituents of like categories, either through juxtaposition or through conjunctions. Pacoh, a Mon-Khmer mountain language of Vietnam, for instance, juxtaposes two verb phrases (VPs) without a conjunction (1) (Watson 1966, 176).
(1) Do [cho t’ôq cayâq] [cho t’ôq apây]                          [Pacoh]
    she return to husband return to grandmother
    ‘She returns to (her) husband and returns to her grandmother.’
Wilder (1997) proposes to analyze conjuncts as either determiner phrases (DPs) or complementizer phrases (CPs), with ellipsis of terminal phonological material and not as deletion of syntactic structure as part of the derivation. Here we focus on VPs and CPs. We leave it open whether the structure of the conjuncts remains ‘small’ (i.e., only VPs) or ‘large’ (i.e., CPs) at this stage of analysis. In many languages, conjunctions are used in different ways to combine constituents. Frequently, a conjunction is assigned to the last conjunct, as shown by the Cantonese example in (2), but some languages require one for each conjunct, either before or after it. Also, some languages use different conjunctions for different grammatical categories. In Upper Kuskokwim Athabaskan, for instance, ʔił is used for noun phrase (NP) conjuncts and ts’eʔ for clausal conjuncts (Kibrik 2004). (3) provides a conjoined sentence with the conjunction ts’eʔ for every clausal conjunct (Kibrik 2004, 544).

(2) ngo3 kam4-maan3 VP[ VP[ jam2-zo2 tong1] tung4 VP[ sik6-zo2 min6-baau1]]   [Cantonese]
    pro-1 last-evening drink-asp soup and eat-asp bread
    ‘I drank soup and ate bread last night.’

(3) nongw donaʔ totis łeka ʔisdlal ts’eʔ            [Upper Kuskokwim Athabaskan]
    from.river upriver portage dog I.did.not.take and
    ch’itsan’ ch’itey nichoh ts’eʔ …
    grass too.much tall and
    ‘I did not take the dogs to the upriver portage because the grass was too tall, and …’
There have been few reports on conjunctions in sign languages (see e.g., Waters/Sutton-Spence (2005) for British Sign Language). American Sign Language (ASL) has overt lexical markers such as and or but, as in (4) (Padden 1988, 95). Padden does not specifically claim these overt lexical markers to be conjunctions or discourse markers. According to her, they may be true conjunctions in coordinate structures if a pause appears between the two clausal conjuncts and the second conjunct is accompanied by a sharp headshake (hs).

                       hs
(4) 1persuadei but change mind                                   [ASL]
    ‘I persuaded her to do it but I/she/he changed my mind.’
Although manual signs like and, but, and or are used by some Deaf people in Hong Kong, they normally occur in signing that follows the Chinese word order. In Australian Sign Language (Auslan), and does not exist, but but does, as shown in (5) (Johnston/Schembri 2007, 213). (5)
(5) k-i-m like cat but p-a-t prefer dog                          [Auslan]
    ‘Kim likes cats but Pat prefers dogs.’
Instead of using an overt conjunction, juxtaposition is primarily adopted, especially in conjunctive coordination (‘and’) for simultaneous and sequential events (e.g., Johnston/Schembri (2007) for Auslan; Padden (1988) for ASL; van Gijn (2004) for Sign Language of the Netherlands (NGT); Vermeerbergen/Leeson/Crasborn (2007) for various sign languages). In the ASL examples in (6) and (7), two clauses are juxtaposed for sequential and simultaneous events (Padden 1988, 85).

(6) Sequential events:                                           [ASL]
    igive1 money, 1index get ticket
    ‘He’ll give me the money, then I’ll get the tickets.’

(7) Simultaneous events:                                         [ASL]
    house blow-up, car icl:3-flip-over
    ‘The house blew up and the car flipped over.’
HKSL examples showing juxtaposition for conjunctive coordination (8a,b), disjunction (8c), and adversative coordination (8d) are presented below. (8a) and (8b) encode sequential and simultaneous events, respectively. (8b) confirms the observation made by Tang, Sze, and Lam (2007) that juxtaposing two VP conjuncts as simultaneous events is done by assigning each event to a manual articulator. In this example, eat-chips is encoded by the signer’s right hand, and drink-soda by his left hand. As for (8c), if it turns out that either is a conjunction, this sign conforms to a distribution of conjunctions discussed in Haspelmath (2004), according to which it occurs obligatorily after the last conjunct (bl = blink, hn = head nod, ht = head turn, bt = body turn).

               bl             hn  bl          hn  bl           hn  bl
(8) a. mother door cl:unlock-door, cl:push-open, cl:enter house          [HKSL]
       ‘Mother unlocked the door, pushed it open (and) went inside.’
                bl  hn               ht right    ht left     ht right
    b. boy ix3 sita, chips, soda, eat-chips, drink-soda, eat-chips, ….
       ‘The boy is sitting here, he is eating chips (and) drinking soda.’
                       bl                  bl  hn+bt right
    c. ix1 go-to beijing, (pro1) take-a-plane,
           bl  hn+bt left               bl
       take-a-train, either doesn’t-matter
       ‘I am going to Beijing. I will take a plane or take a train. Either way, it doesn’t matter.’
                       bl                       bl  hn+ht+bt forward
    d. exam come-close, ruth diligent do-homework,
           bl  hn+ht+bt backward
       hannah lazy watch-tv
       ‘The exam is coming close; Ruth is diligently doing her homework (but) Hannah is lazy and watches TV.’
2.2. Diagnostics for coordination

In this section, we will briefly summarize three diagnostics which are said to be associated with coordination in spoken languages: extraction, gapping, and negation. We will investigate whether coordination in sign languages is also sensitive to the constraints involved in these grammatical operations.
2.2.1. Extraction

It has been commonly observed in spoken languages that movement out of a coordinate structure is subject to the Coordinate Structure Constraint given in (9).

(9)  Coordinate Structure Constraint (CSC)
     In a coordinate structure, no conjunct can be moved, nor may any element contained in a conjunct be moved out of that conjunct. (Ross 1967, 98 f.)
The CSC prevents movement of an entire conjunct (10a) or a constituent within a conjunct (10b) out of a coordinate structure.
(10) a. *Whati did Michael eat and ti?
     b. *Whati did Michael play golf and read ti?
Padden (1988) claimed that ASL also obeys the CSC. In (11), for instance, topicalizing an NP object out of a coordinate structure is prohibited (Padden 1988, 93; t = non-manual topic marking; subscripts appear as in the original example).

       t
(11) *flower, 2give1 money, jgivei                                    [ASL]
     'Flowers, he gave me money but she gave me.'
A'-movement such as topicalization and wh-question formation in HKSL also leads to similar results. Topics in HKSL occupy a position in the left periphery, whereas the wh-arguments are either in-situ or occupy a clause-final position (Tang 2004). The following examples show that extraction of an object NP from either the first or second VP conjunct in topicalization (12b,c) or wh-questions (13b,c) is disallowed.

(12) a. first group responsible cooking, second group responsible design game   [HKSL]
        'The first group is responsible for cooking and the second group is responsible for designing games.'
         t
     b. *cookingi, first group responsible ti, second group responsible design game
         t
     c. *design gamei, first group responsible cooking, second group responsible ti

(13) a. yesterday dad play speedboat, eat cow^cl:cut-with-fork-and-knife        [HKSL]
        'Daddy played speedboat and ate steak yesterday.'
     b. *yesterday dad play ti, eat cow^cl:cut-with-fork-and-knife whati
        Lit. '*What did daddy play and eat steak?'
     c. *yesterday dad play speedboat, eat whati?
        Lit. '*What did daddy play speedboat and eat?'
Following Ross (1967), Williams (1978) argues that the CSC can be voided if the grammatical operation applies in 'across-the-board' (ATB) fashion. In the current analysis, this means that an identical constituent is extracted from each conjunct in the coordinate structure. In (14a) and (14b), a DP that bears an identical grammatical relation in both conjuncts has been extracted. Under these circumstances, no CSC violation obtains.

(14) a. John wondered whati [Peter bought ti] and [the hawker sold ti]
     b. The mani who ti loves cats and ti hates dogs …
However, ATB movement fails if the extracted argument does not bear the same grammatical relation in both conjuncts. In (15), the DP a man cannot be extracted because it is the subject of the first conjunct but the object of the second conjunct.
(15) *A mani who ti loves cats and the woman hates ti …
ATB movement also applies to coordinate structures in ASL and HKSL, as shown in (16a), from Lillo-Martin (1991, 60), and in (16b). In these examples, topicalization is possible because the topic is the grammatical object of both conjuncts and encodes the same generic referent. However, just as in (15), ATB movement is disallowed in the HKSL example in (16c) because the fronted DP [ixa boy] does not bear the same grammatical relation to the verb in the two TP conjuncts.

       t
(16) a. athat moviei, bsteve like ei but cjulie dislike ei            [ASL]
        'That moviei, Steve likes ei but Julie dislikes ei.'
        t
     b. orangei, mother like ti, father dislike ti                    [HKSL]
        'Orange, mother likes (and) father dislikes.'
        top
     c. *ixa boyi, ti eat chips, girl like ti                         [HKSL]
        Lit. 'As for the boy, (he) eats chips (and) the girl likes (him).'
However, while topicalization in ATB fashion works in HKSL, it fails with wh-question formation even if the extracted wh-element bears the same grammatical relation in both TP conjuncts, as shown in (17). Obviously, the wh-operator cannot be co-indexed with the two wh-traces in (17). Instead, each clause requires its own wh-operator, implying that they are two independent clauses (18).

                                          wh
(17) *mother like ti, father dislike ti, whati?                       [HKSL]
     Lit. 'What does mother like and father dislike?'
                      wh                       wh
(18) mother like tj whatj? father dislike ti, whati?                  [HKSL]
     Lit. 'What does mother like? What does father dislike?'
In sum, the data from ASL and HKSL indicate that extraction out of a coordinate structure violates the CSC. However, it is still not clear why topicalization in ATB fashion yields a licit structure while this A'-movement fails in wh-question formation – at least in HKSL. Assuming a split-CP analysis with different levels for interrogation and topicalization, one might argue that the difference is due to the directionality of SpecCP in HKSL. As the data show, the specifier position for interrogation is in the right periphery (18) while that for topicalization is on the left (16b) (see chapter 14 for further discussion on wh-questions and the position of SpecCP). Possibly, the direction of SpecCP interacts with ATB movement. Further research is required to verify this issue.
2.2.2. Gapping

In spoken language, coordinate structures often yield a reduction of the syntactic structure, and ellipsis has been put forward to account for this phenomenon. One instance of ellipsis is gapping. In English, the verb in the second clausal conjunct can be 'gapped' under conditions of identity with the verb in the first conjunct (19a). In fact, cross-linguistic studies show that the direction of gapping in coordinate structures is dependent upon word order (Ross 1970, 251). In particular, SVO languages like English show forward gapping in the form of SVO and SO (i.e., deletion of the identical verb in the second conjunct); hence (19b) is ungrammatical because the verb of the first conjunct is gapped. In contrast, SOV languages show backward gapping in the form of SO and SOV (i.e., deletion of the identical verb in the first conjunct), as the data from Japanese show (20a). If the verb of the second conjunct is gapped, the sentence is ungrammatical (20b).

(19) a. [Sally eats an apple] and [Paul Ø a candy].
     b. *[Sally Ø an apple] and [Paul eats a candy].

(20) a. [Sally-wa ringo-o Ø], [Paul-wa ame-o tabe-ta]                 [Japanese]
        Sally-top apple-acc   Paul-top candy-acc eat-past
        Lit. 'Sally an apple and Paul ate a candy.'
     b. *[Sally-wa ringo-o tabe-te], [Paul-wa ame-o Ø]
        Sally-top apple-acc eat-ger   Paul-top candy-acc
        'Sally ate an apple and Paul a candy.'
Little research has been conducted on gapping in sign languages. Liddell (1980) observes that gapping exists in ASL and that a head nod accompanying the remnant object NP is necessary, as shown in (21), which lists a number of subject-object pairs. A reanalysis of this example shows that the constraint on gapping mentioned above also applies: (21) displays an SVO pattern, hence forward gapping is expected (Liddell 1980, 31).

                                                      hn
(21) have wonderful picnic. pro.1 bring salad, john beer,             [ASL]
            hn               hn
     sandy chicken, ted hamburger
     'We had a wonderful picnic. I brought the salad, John (brought) the beer, Sandy (brought) the chicken and Ted (brought) the hamburger.'

Forward gapping for SVO sentences is also observed in HKSL, as shown in (22a). While the head nod occurs on the object of the gapped verb in ASL, HKSL involves an additional forward body lean (bl). However, it seems that gapping in HKSL interacts not only with word order, but also with verb type, in the sense that plain verbs, but not agreeing or classifier verbs, allow gapping; compare (22a) with (22b) and (22c).

                              bl forward+hn
(22) a. tomorrow picnic, ix1 bring chicken wing,                      [HKSL]
        bl forward+hn        bl forward+hn
        pippen sandwiches,   kenny cola,
        bl forward+hn
        connie chocolate
        '(We) will have a picnic tomorrow. I will bring chicken wings, Pippen (brings) sandwiches, Kenny (brings) cola, (and) Connie (brings) chocolate.'
     b. *kenny 0scold3 brenda, pippen Ø connie
        'Kenny scolds Brenda (and) Pippen Ø Connie.'
     c. *ix1 head wall Ø, brenda head window cl:head-bang-against-flat-surface
        'I banged my head against the wall and Brenda against the window.'

One possible explanation for why HKSL does not allow agreeing and classifier verbs to be gapped in coordinate structures is that these verbs express the grammatical relations of their arguments through space. In sign languages, the path and the spatial loci encode grammatical relations between the subject and the object (see chapter 7, Verb Agreement, for discussion). Thus, gapping the spatially marked agreeing verb scold (22b) or the classifier predicate cl:head-bang-against-flat-surface (22c) violates constraints on identification. We assume that the gapped element lacks phonetic content but still needs to be interpreted, since syntactic derivations feed the interpretive components. However, in contrast to English, where agreement effects can be voided in identification (Wilder 1997), agreement effects, such as overt spatial locative or person marking, are obligatory in HKSL, and probably in sign languages in general. Otherwise, the 'gapped verb' results in a failure to identify the spatial loci at which the referents or their associated person features are necessarily encoded. This leads not only to ambiguity of referents, but also to ungrammaticality of the structure. Note that word order is not an issue here: even though classifier predicates in HKSL normally yield an SOV order, so that one would expect backward gapping, (22b) and (22c) show that both forward and backward gapping are unacceptable as far as agreeing and classifier verbs are concerned. In fact, it has been observed in ASL that verb types in sign languages yield differences in grammatical operations. Lillo-Martin (1986, 1991) found that topicalizing the object of a plain verb in ASL requires a resumptive pronoun, while the object can be null in the case of agreeing verbs (see section 3.1.2). The analysis of the constraints on gapping and topicalization in HKSL opens up a new avenue of research for testing modality effects in syntactic structure.
2.2.3. Scope of yes/no-questions and negation

Scope of yes/no-questions and negation is another diagnostic of coordination. Manual operators like the negator and the Q-morpheme in HKSL can scope over the coordinate structure, as in (23a) and (23b) (re = raised eyebrows).

(23) a. pippen brenda they-both go horse-betting.                     [HKSL]
        hn+bt left    hn+bt backward right   re
        brenda win,   pippen lose,           right-wrong?
        Lit. 'Pippen and Brenda both went horse-betting. Did Brenda win and Pippen lose?'
     b. teacher play speedboat eat cow^cl:cut-with-fork-and-knife not-have
        'The teacher did not ride the speedboat and did not eat beef steak.'
(23a) offers a further example of adversative coordination in HKSL with both conjuncts being scoped over by the clause-final Q-morpheme right-wrong accompanied by brow-raise. In fact, both conjuncts must be true for the question to receive an affirmative answer; if one of the conjuncts is false or both are false, the answer will be negative. In (23b), the negator not-have scopes over both conjuncts. The fact that an element takes scope over the conjuncts in ATB fashion is similar to the Cantonese example in (2) above, where the two VP conjuncts coordinated by the conjunction tong ('and') are scoped over by the temporal adverbial kum-maan ('last night'), and marked by the same perfective marker -zo. Where a non-manual operator is used, some data from ASL and HKSL indicate that it is possible to have just one conjunct under the scope of negation. In the ASL example (24a), the non-manual negation (i.e., headshake) only scopes over the first conjunct but not the second, which has a head nod instead (Padden 1988, 90). In the HKSL example (24b), the first conjunct is affirmative, as indicated by the occurrence of small but repeated head nods, but the second conjunct is negative and ends with the sentential negator not, which is accompanied by a set of various non-manual markers (i.e., head tilted backward, headshake, and pursed lips). Note that both (24a) and (24b) involve adversative but not conjunctive coordination. In HKSL, the non-manual marking has to scope over both conjuncts in conjunctive coordination; scoping over just one conjunct, as in (24c), leads to ungrammaticality. In other words, the scope of yes/no-questions or negation is a better diagnostic for conjunctive coordination than for other types of coordination. As our informants suggest, (24b) behaves more like a juxtaposition of two independent clauses, hence failing to serve as a good diagnostic for coordinate structures (n = negative headshake).
        n                  hn
(24) a. iindex telephone, jindex mail letter                          [ASL]
        'I didn't telephone but she sent a letter.'
        hn+++        ht backward+hs+pursed lips
     b. felix come gladys come not                                    [HKSL]
        'Felix will come (but) Gladys will not come.'
        yn
     c. *felix come gladys go                                         [HKSL]
        Lit. '*Will Felix come? (and) Gladys will leave.'
In this section, we have summarized the findings on coordination in sign languages reported so far; specifically, we have examined the constraints involved in wh-extraction, gapping, and the scope of some morphosyntactic devices for yes/no-questions and negation over the coordinate structure. We found that topicalization observes the CSC and applies in ATB fashion more readily than wh-question formation in these languages. As for gapping, we suggest that verb types in sign languages may have an effect on gapping. Lastly, the scope properties of the Q-morpheme in yes/no-questions and of the negator not in conjunctive coordination allow us to identify the constraints on coordinate structures. As we have shown, using negation in adversative coordination may lead to different syntactic behaviors. As for the use of non-manuals, we suggest that head nods and body turns are crucial cues for the different types of coordination if no manual conjunctions are present. In the following section, we will explore another process of clause combining – subordination – which typically results in an asymmetrical structure.
3. Subordination

Compared with coordination, subordination has received relatively more attention in sign language research. Thompson's (1977) claim that ASL does not have grammatical means for subordination sparked off a quest for tests of syntactic dependency. Subsequent research on ASL has convincingly shown that looking for manual markers of subordination misses the point because certain subordinate structures are marked only non-manually (Liddell 1980). Padden (1988) also suggests some syntactic diagnostics for embedded sentential complements in ASL, namely subject pronoun copies for matrix subjects, the spread of non-manual negation into subordinate but not coordinate structures, as well as wh-extraction from embedded clauses. However, subsequent research on NGT yielded different results (van Gijn 2004). In this section, we will first focus on sentential complements and their associated diagnostics (section 3.1). Typologically, sentential complements are situated towards the higher end of clause integration, with complementizers as formal morphosyntactic devices marking the grammatical relations. Where these devices are absent in sign languages, we argue that the spread of non-manuals might offer a clue to syntactic dependencies, similar to the observations in coordinate structures. In section 3.2, we turn our attention to relative clauses, that is, embedding within DP, and provide a typological sketch of relativization strategies in different sign languages. Note that, due to space limitations, we will not discuss adverbial clauses in this chapter (see Coulter (1979) and Wilbur/Patschke (1999) for ASL; Dachkovsky (2008) for Israeli Sign Language).
3.1. Sentential complements

Sentential complements function as subject or object arguments, usually subcategorized for by a verb, a noun, or an adjective. In Generative Grammar, sentential complements are usually analyzed as CPs. Depending on the features of the head, the embedded clause may be finite or non-finite, and the force may be interrogative or declarative. Typologically, not all languages have overt complementizers to mark sentential complements. Complementizers derive historically from pronouns, conjunctions, adpositions, or case markers, and rarely from verbs (Noonan 1985). Cantonese has no complementizers for either declarative or interrogative complement clauses, as exemplified in (25a) and (25b). The default force of the embedded clause is usually declarative unless the matrix verb subcategorizes for an embedded interrogative, signaled by an 'A-not-A' construction like sik-m-sik ('eat-not-eat') in (25b), which is a type of yes/no-question (int = intensifier).

(25) a. ngo3 lam2 CP[ Ø TP[tiu3 fu3 taai3 song1]TP ]CP                [Cantonese]
        pro-1 think      cl  pants int  loose
        'I think the pants are too loose.'
     b. ngo3 man4 CP[ Ø TP[keoi3 sik6-m4-sik6 faan6]TP ]CP
        pro-1 ask        pro-3 eat-not-eat   rice
        'I ask if he eats rice.'
In English, null complementizers are sometimes allowed in sentential complements; compare (26a) with (26b). However, a complementizer is required when the force is interrogative, as the ungrammaticality of (26c) shows.

(26) a. Kenny thinks CP[ Ø TP[Brenda likes Connie]TP ]CP.
     b. Kenny thinks CP[ that TP[Brenda likes Connie]TP ]CP.
     c. *Kenny asks CP[ Ø TP[Brenda likes Connie]TP ]CP.
Null complementizers have been reported for many sign languages. Without an overt manual marker, it is difficult to distinguish coordinate from subordinate structures at the surface level. Where subordinate structures are identified, we assume that the complementizer is not spelled out phonetically and that the default force is declarative, as shown in (27a–d) for ASL (Padden 1988, 85), NGT (van Gijn 2004, 36), and HKSL (see Herrmann (2007) for Irish Sign Language and Johnston/Schembri (2007) for Auslan).

(27) a. 1index hope iindex come visit will                            [ASL]
        'I hope he will come to visit.'
     b. pointsigner know pointaddressee addresseecomesigner           [NGT]
        'I know that you are coming to (see) me.'
     c. ix1 hope willy next month fly-back hk                         [HKSL]
        'I hope Willy will fly back to Hong Kong next month.'
                                                    yn
     d. rightasksigner rightattract-attentionsigner ixaddressee want coffee   [NGT]
        'He/she asks me: "Do you want any coffee?"'
Van Gijn (2004) observes that there is a serial verb in NGT, roepen (‘to attract attention’), which may potentially be developing into a complementizer. roepen (glossed here as attract-attention) occasionally follows utterance verbs like ask to introduce a ‘direct speech complement’, as in (27d) (van Gijn 2004, 37). As mentioned above, various diagnostics have been suggested as tests of subordination in ASL. Some of these diagnostics involve general constraints of natural languages. In the following section, we will summarize research that examines these issues.
3.1.1. Subject pronoun copy

In ASL, a subject pronoun copy may occur in clause-final position without a pause preceding it. The copy is coreferential either with the subject of a simple clause (28a) or with the subject of a matrix clause (28b), but not with the subject of an embedded clause. Padden (1988) suggests that a subject pronoun copy is an indicator of syntactic dependency between a matrix and a subordinate clause. It also distinguishes subordinate from coordinate structures because a clause-final pronoun copy can only be coreferential with the subject of the second conjunct, not the first, when the subject is not shared between the conjuncts. Therefore, (28c) is ungrammatical because the pronoun copy is coreferential with the (covert) subject of the first conjunct (Padden 1988, 86–88).

(28) a. 1index go-away 1index                                         [ASL]
        'I'm going, for sure (I am).'
     b. 1index decide iindex should idrivej see children 1index
        'I decided he ought to drive over to see his children, I did.'
     c. *1hiti, iindex tattle mother 1index
        'I hit him and he told his mother, I did.'
It turns out, however, that this test of subordination cannot be applied to NGT and HKSL. An example similar to (28b) is ungrammatical in NGT, as shown in (29a): the subject marijke in the matrix clause cannot license the sentence-final copy pointright. As illustrated in (29b), the copy, if it occurs, appears at the end of the matrix clause (i.e., after know in this case), not at the end of the embedded clause (van Gijn 2004, 94). HKSL also displays different coreference properties with clause-final pronoun copies. If a final index sign does occur, the direction of pointing determines which grammatical subject it is coreferential with. An upward pointing sign (i.e., ixai), as in (29c), assigns the pronoun to the matrix subject only. Note that the referent gladys, the matrix subject, is not present in the signing discourse; the upward pointing thus obviates locus assignment. Under these circumstances, (29d) is ungrammatical, since the upward pointing pronoun ixaj is intended to be coreferential with the embedded subject pippen. On the other hand, the pronoun ixbj in (29e), which points towards a locus in space, refers to the embedded subject pippen.

(29) a. *marijke pointright know inge pointleft leftcomesigner pointright    [NGT]
        'Marijke knows that Inge comes to me.'
     b. inge pointright know pointright pointsigner italy signergo.toneu.space   [NGT]
        'Inge knows that I am going to Italy.'
     c. gladysi suspect pippen steal car ixai                         [HKSL]
        'Gladys suspected Pippen stole the car, she did.'
     d. *gladysi suspect pippenj steal car ixaj                       [HKSL]
        Lit. 'Gladys suspected Pippen stole the car, he did.'
     e. gladysi suspect pippenj steal car ixbj                        [HKSL]
        'Gladys suspected Pippen stole the car, he did.'
It is still unclear why the nature of pointing, that is, the difference between pointing to an intended locus like 'bj' in (29e) for the embedded subject versus an unintended locus like 'ai' in (29c) for the matrix subject, leads to a difference in coreference in HKSL. The former could be a result of modality: when a referent is physically present in the discourse, this constrains the direction of pointing of the index sign. This finding lends support to the claim that clause-final index signs without an intended locus refer to the matrix subject in HKSL. In sum, it appears that subject pronoun copy cannot be adopted as a general test of subordination in sign languages. Rather, this test seems to be language-specific, as it works in ASL but not in NGT and HKSL.
3.1.2. Wh-extraction

The second test for subordination has to do with constraints on wh-extraction. In section 2.2.1, we pointed out that extraction out of a conjunct of a coordinate structure is generally not permitted unless the rule is applied in ATB fashion. In fact, Ross (1967) also posits constraints on extraction out of wh-islands (30a–c). This constraint has been attested in many spoken languages, offering evidence that long-distance wh-movement is successively cyclic, targeting SpecCP at each clause boundary.

(30) a. Whoi do you think Mary will invite ti?
     b. *Whoi do you think what Mary did to ti?
     c. *Whoi do you wonder why Tom hates ti?
(30b) and (30c) have been argued to be ungrammatical because the intermediate wh-clause is a syntactic island in English and further movement of a wh-constituent out of it is barred. Typological studies on wh-questions in sign languages found three syntactic positions for wh-expressions: in-situ, clause-initial, or clause-final (Zeshan 2004). In ASL, although the wh-expressions in simple wh-questions may occupy any of the three syntactic positions (see chapter 14 on the corresponding debate on this issue), they are consistently clause-initial in the intermediate SpecCP position for both argument and adjunct questions (Petronio/Lillo-Martin 1997). In other words, this constitutes evidence for embedded wh-questions in ASL. In HKSL, the wh-expression of direct argument questions is either in-situ or clause-final, and that of adjunct questions is consistently clause-final. However, in embedded questions, the wh-expressions are consistently clause-final, as in (31a) and (31b), and this applies to both argument and adjunct questions.

(31) a. father wonder help kenny who                                  [HKSL]
        'Father wondered who helped Kenny.'
     b. kenny wonder gladys cook crab how
        'Kenny wondered how Gladys cooked the crabs.'
Constraints on extraction out of embedded clauses have been examined. In NGT, extraction is possible only with some complement-taking predicates, such as 'to want' (32a) and 'to see', but impossible with 'to believe' (32b) and 'to ask' (van Gijn 2004, 144 f.).

      wh
(32) a. who boy pointright want rightvisitleft twho                   [NGT]
        'Who does the boy want to visit?'
      wh
     b. *who inge believe twho signervisitleft
        'Who does Inge believe visits him?'

Lillo-Martin (1986, 1992) claims that embedded wh-questions are islands in ASL; hence, extraction is highly constrained. Therefore, the topic in (33) is base-generated and a resumptive pronoun (i.e., apronoun) is required.

      t
(33) amother, 1pronoun don't-know "what" *(apronoun) like             [ASL]
     'Mother, I don't know what she likes.'
HKSL behaves similarly. (34a) illustrates that topicalizing the object from an embedded wh-question also leads to ungrammaticality. In fact, this operation cannot even be saved by a resumptive pronoun (34b); neither can it be saved by signing buy at the locus of the nominal sofa in space (34c). It seems that embedded adjunct questions are strong islands in HKSL and extraction is highly constrained. Our informants only accepted in-situ wh-morphemes, as shown in (34d).

(34) a. *ixi sofa, ix1 wonder dad buy ti where                        [HKSL]
     b. *ixi sofa, ix1 wonder dad buy ixi where
     c. *ixi sofa, ix1 wonder dad buyi where
        'As for that sofa, I wonder where dad bought it.'
     d. ix1 wonder dad buy ixi sofa where
        'I wonder where dad bought the sofa.'
The results from wh-extraction are more consistent among the sign languages studied, suggesting that the island constraints are modality-independent. HKSL seems to be more constrained than ASL because HKSL does not allow wh-extraction at all out of embedded wh-adjunct questions while in ASL, resumptive pronouns or locative agreement can circumvent the violation. It may be that agreeing verbs involving space for person features satisfy the condition of identification of null elements in the ASL grammar. In the next section, we will examine non-manuals as diagnostics for subordination.
3.1.3. Spread of non-manuals in sentential complementation

In contrast to coordinate structures, non-manuals may spread from the matrix clause to the embedded clause, demonstrating that the clausal structure of coordination differs from that of subordination. This is shown by the ASL examples in (35) for non-manual negation (Padden 1988, 89) and yes/no-question non-manuals (Liddell 1980, 124). This could be due to the fact that pauses are not necessary between the matrix and embedded clauses, unlike in coordination, where a pause is normally observed between the conjuncts (Liddell 1980; n = non-manuals for negation).

        n
(35) a. 1index want jindex go-away                                    [ASL]
        'I didn't want him to leave.'
      yn
     b. remember dog chase cat
        'Do you remember that the dog chased the cat?'

However, the spread of non-manual negation as observed in ASL turns out not to be a reliable diagnostic for subordination in NGT and HKSL. The examples in (36) illustrate that in NGT, the non-manuals may (36a) or may not (36b) spread onto the embedded clause (van Gijn 2004, 113, 119).
        neg
(36) a. pointsigner want pointaddressee neu spacecome-alongsigner     [NGT]
        'I do not want you to come along.'
              neg
     b. inge believe pointright pointsigner signervisitleft marijke
        'Inge does not believe that I visit Marijke.'

HKSL does not systematically use non-manual negation like headshake as a grammatical marker. However, in HKSL, the scope of negation may offer evidence for subordination. In some cases, it interacts with body leans. In (37a), the sign not occurring at the end of the embedded clause generally scopes over the embedded clause but not the matrix clause. Therefore, the second reading is not acceptable to the signers. To negate the matrix clause, signers prefer to extrapose the embedded clause by means of topicalization, as in (37b). Body leans are another way to mark the hierarchical structure of matrix negation. In (37c), the clause-final negator not scopes over the matrix but not the subordinate clause. (37c) differs from (37a) in the topicalization of the entire embedded clause with a forward body lean, followed by a backward body lean and the manual sign not, signaling matrix negation.
(37) a. gladys think willy come-back not                              [HKSL]
        i. 'Gladys thinks Willy will not come back.'
        ii. *'Gladys does not think Willy will come back.'
        top
     b. willy come-backi, gladys say ti not-have
        'As for Willy's coming back, Gladys did not say so.'
        bl forward top                      bl back
     c. gladys want willy come-back hk not
        'As for Gladys wanting Willy to come back to Hong Kong, it is not the case.'

Branchini et al. (2007) also observe that, given the basic SOV word order of Italian Sign Language (LIS), subordinate clauses are always extraposed, either to the left periphery (38a) or to the right periphery (38b). They argue that subordinate clauses do not occur in their base position preceding the verb (38c) but rather are extraposed to the periphery to avoid the processing load of centre embedding (te = tensed eyes).
      te
(38) a. [maria house buy] paolo want                                  [LIS]
                     te
     b. paolo want [maria house buy]
     c. *paolo [maria house buy] want
        'Paolo wants Maria to buy a house.'

It could be that different sign languages rely on different grammatical processes as tests of subordination. In HKSL, another plausible diagnostic is the spread of a non-manual associated with the verb in the matrix clause. For verbs like believe, guess, and want, which take object complement clauses, we observe pursed lips as a lexical non-manual. In (39a) and (39b), no pause is observed at the clause boundary; the lips are pursed and the head tilts sideward for the verb in the matrix clause, and these non-manuals spread till the end of the complement clause, followed by a head nod, suggesting that the verb together with its complement clause forms a constituent of some kind.
(39) a. male house look-outi, sky cl:thick-cloud-hover-above          [HKSL]
        pursed lips+ hn
        male guess tomorrow rain
        'The man looks out (of the window) and sees thick clouds hovering in the sky above. The man guesses it will rain tomorrow.'
        pursed lips+ hn
     b. ix1 look-at dress pretty; want buy give brenda
        'I saw a pretty dress; I want to buy it and give it to Brenda.'

The same phenomenon is observed in indirect yes/no-questions subcategorized for by the verb wonder. In this context, we observe the spread of pursed lips and brow-raise from the verb onto the indirect yes/no-question, with brow-raise peaking at the sign expensive in (40). These non-manuals thus suggest that it is an embedded yes/no-question.
      yn
(40) ix1 wonder ixdet car expensive                                   [HKSL]
     'I wonder if this car is expensive.'
One may wonder whether these lexical non-manuals stemming from the verbs have any grammatical import. In the literature, certain non-manuals like headshake and eye gaze have been suggested to be the overt realization of formal grammatical features residing in functional heads. Assuming that there is a division of labor between non-manuals at different linguistic levels (Pfau/Quer 2010), what we observe here is that lexical non-manuals associated with the verb spread over a CP domain that the verb subcategorizes for. It could be that these non-manuals bear certain semantic functions. In this case, verbs like guess, want, and wonder denote mental states; semantically, the proposition encoded in the embedded clause is scoped over by these verbs, and thus the lexical non-manuals scope over these propositions. In this section, we have examined to what extent the spread of non-manuals over embedded clauses provides evidence of subordination. Matrix yes/no-questions appear to invoke a consistent spread of non-manuals over the embedded clauses across sign languages. However, patterns are less consistent with respect to non-manual negation: in complex sentences, sign languages like ASL, NGT, and HKSL show different spreading behaviors for the negative headshake. HKSL instead makes use of the scope of negation, which offers indirect evidence for embedded clauses. We also observe that non-manuals associated with lexical verbs spread into embedded clauses, offering evidence for sentential complementation. It seems that if non-manuals do spread, they start from the matrix verb and spread to the end of the embedded clause. Therefore, in order to use the spread of non-manuals as a diagnostic, a prerequisite is to confirm whether the sign language in question uses them. As we have seen, NGT and HKSL do not use spread of headshake while ASL does.
3.2. Relative clauses

Relative clauses (RCs) have been widely studied in spoken languages, and typological analyses centre around structural properties such as whether the RCs (i) are head-external or head-internal, (ii) postnominal or prenominal, (iii) restrictive or non-restrictive, (iv) employ relative markers such as relative pronouns, personal pronouns, resumptive pronouns, etc., and (v) where they are positioned within a sentence (Keenan 1985; Lehmann 1986). In sign languages, an additional dimension of analysis concerns the use of non-manuals in marking RCs. Typologically, Dryer (1992) found a much higher tendency of occurrence for postnominal than prenominal RCs: in his sample, 98 % of VO languages and 58 % of OV languages have postnominal RCs. Externally and internally headed relative clauses (EHRCs vs. IHRCs) are analyzed as complex NPs, while correlatives are subordinating sentences (Basilico 1996; de Vries 2002). Clear cases of IHRCs are observed in SOV languages, and they may co-occur with prenominal EHRCs (Keenan 1985, 163). To date, investigations into relativization strategies in sign languages have been conducted primarily on ASL, LIS, and DGS. In this section, we will add some preliminary observations from HKSL. We will first focus on the type and position of the RCs and the use of non-manuals (section 3.2.1), before turning to the use of relative markers (section 3.2.2). The discussion, which only addresses restrictive RCs, will demonstrate that the strategies for relativization in sign languages vary cross-linguistically, similarly to spoken languages.
3.2.1. Types of relative clauses

To date, various types of RCs have been reported for a number of sign languages, except for prenominal RCs. Liddell (1978, 1980) argues that ASL displays both IHRCs (41a) and postnominal EHRCs (41b) (Liddell 1980, 162). According to Liddell, there are two ways to distinguish EHRCs and IHRCs in ASL. First, in (41a), the non-manual marker for relativization extends over the head noun dog, indicating that the head noun is part of the RC, while in (41b), dog is outside the domain of the non-manual marker. Second, in (41a), the temporal adverbial preceding the head noun scopes over the verb of the RC, and if the adverbial is part of the RC, then the head noun following it cannot be outside the RC (rel = non-manuals for relatives).
      rel
(41) a. recently dog chase cat come home                              [ASL]
        'The dog which recently chased the cat came home.'
                          rel
     b. 1ask3 give1 dog [[ursula kick]S thatc]NP
        'I asked him to give me the dog that Ursula kicked.'

As for non-manual marking, brow raise has been found to commonly mark relativization. Other (language-specific) non-manuals reported in the literature include backward head tilt and raised upper lips for ASL, a slight body lean towards the location of the relative pronoun for DGS, and tensed eyes and pursed lips for LIS. According to Pfau and Steinbach (2005), DGS employs postnominal EHRCs, which are introduced by a relative pronoun (rpro; see 3.2.2 for further discussion). In (42), the non-manual marker accompanies only the pronoun. The adverbial preceding the head noun is outside the non-manual marker and scopes over the matrix clause verb arrive (Pfau/Steinbach 2005, 513). Optionally, the RC can be extraposed to the right, such that it appears sentence-finally.
                          re
(42) yesterday [man ix3 [rpro-h3 cat stroke]CP ]DP arrive             [DGS]
     'The man who is stroking the cat arrived yesterday.'
The status of RCs in LIS is less clear, as there are two competing analyses. Branchini and Donati (2009) suggest that LIS has IHRCs marked by a clause-final determiner, which, based on the accompanying mouthing, they gloss as pe (43a). In contrast, Cecchetto, Geraci, and Zucchi (2006) argue that LIS RCs are actually correlatives marked by a demonstrative morpheme glossed as prorel (43b). Note that in (43a), just as in (41a), the non-manual marker extends over the head noun (man) and the adverbial preceding the head noun, which scopes over the RC verb bring.

      re
(43) a. today mani pie bring pei yesterday (ixi) dance                [LIS]
        'The man that brought the pie today danced yesterday.'
                 rel
     b. boy icall proreli leave done
        'A boy that called left.'

Wilbur and Patschke (1999) propose that brow raise marks constituents that underwent A'-movement to SpecCP. Following Neidle et al. (2000), Pfau and Steinbach (2005) argue that brow raise realizes a formal grammatical feature residing in a functional head. Brow raise identifies the domain for the checking of the formal features of the operator. A relative pronoun has two functions: it is an A'-operator bearing wh-features or it is a referring/demonstrative element bearing d-features (Bennis 2001).
In ASL, where there is no overt operator, brow raise spreads over the entire IHRC (41a). In DGS, it usually co-occurs with only the relative pronoun (42), but optionally, it may spread onto the entire RC, similar to (41b). For LIS, different observations have been reported. Branchini and Donati (2009) argue that brow raise spreads over the entire RC, as in (43a), but occasionally, it accompanies the pe-sign only. In contrast, Cecchetto, Geraci, and Zucchi (2006) report that brow raise is usually restricted to the clause-final sign prorel, but may spread onto the verb that precedes it (43b). HKSL displays IHRCs. In (44), brow raise scopes over the head noun male and the RC. Clearly, the RC occupies an argument position in this sentence: the head noun is the object of the matrix verb like but the subject of the verb eat within the RC.
                   rel
(44) hey! ix3 like [ixi male eat chips ixi]                           [HKSL]
     'Hey! She likes the man who is eating chips.'
Liddell (1980) claims that there is a tendency for IHRCs to occur clause-initially in ASL. The corresponding clause in LIS shows a similar distribution (Branchini/Donati 2009; Cecchetto/Geraci/Zucchi 2006). (45a) shows that in HKSL, where the basic word order is SVO (Sze 2003), the RC (ixa boy run) is topicalized to a left-peripheral position; a boundary blink is observed at the right edge of the RC, followed by the head tilting backward when the main clause is signed. The fact that brow raise also marks topicalized constituents in HKSL makes it difficult to tease apart the relativizing and topicalizing functions of brow raise in this example. This is even more so in (45b), where the topicalized RC is under the scope of the yes/no-question.

      rel/top
(45) a. ixa boy run ix1 know                                          [HKSL]
        'The boy that is running, I know (him).'
        rel/top                                             y/n
     b. female ixa cycle clothes orange ixa help1 introduce1 good?
        'As for the lady that is cycling and in orange clothes, will you help introduce (her) to me?'
360
III. Syntax rel
(46)
a. yesterday ixa female cycle ix1 letter senda ‘I sent a letter to that lady who cycled yesterday’
[HKSL]
rel
b. tomorrow ixa female buy car fly-to-beijing *‘The lady who is buying the car will fly to Beijing tomorrow.’ ?? ‘Tomorrow that lady will buy a car and fly to Beijing.’ rel
c. ixa female cycle (ixa) tomorrow fly-to-beijing ‘The lady who is cycling will fly to Beijing tomorrow.’
3.2.2. Markers for relativization According to Keenan (1985), EHRCs may involve a personal pronoun (e.g., Hebrew), a relative pronoun (e.g., English), both (e.g., Modern Greek), or none (e.g., English, Hebrew). Relative pronouns are pronominal elements that are morphologically similar to demonstrative or interrogative pronouns. They occur either at the end of the RC, or before or after the head noun. As for IHRCs, they are not generally marked morphologically, hence leading to ambiguity if the RC contains more than one NP. However, the entire clause may be nominalized and be marked by a determiner (e.g., Tibetan) or some definiteness marker (e.g., Diegueño). Correlatives, on the other hand, are consistently morphologically marked for their status as subordinate clauses and the marker is coreferential with a NP in the main clause. There have been discussions about the morphological markers attested in relativization in sign languages. In ASL, there are a few forms of that, to which Liddell (1980) has ascribed different grammatical status. First, ASL has the sign thata, which Liddell termed ‘relative conjunction’ (47a). This sign normally marks the head noun in an IHRC (Liddell 1980, 149 f.). There is another sign thatb which occurs at the end of a RC and which is usually articulated with intensification (47b). Based on the scope of the non-manuals, thatc in (47b) does not belong to the RC domain. Liddell argues that thatc is a complementizer and that it is accompanied by a head nod (Liddell 1980, 150). re
(47)
a. recently dog thata chase cat come home. ‘The dog which recently chased the cat came home.’
[ASL]
re
b. ix1 feed dog bite cat thatb thatc ‘I fed the dog that bit the cat.’ In (42), we have already seen that DGS makes use of a relative pronoun. This pronoun agrees with the head noun in the feature [Ghuman] and comes in two forms: the one used with human referents (i.e., rpro-h) adopts the classifier handshape for humans; the one referring to non-human entities is similar to the regular index signs (i.e., rpronh). These forms are analyzed as outputs of grammaticalization of an indexical deter-
16. Coordination and subordination miner sign. The presence of relative pronouns in DGS is in line with the observation of Keenan (1985) that relative pronouns are typical of postnominal EHRCs. In LIS, different grammatical status has been ascribed to the indexical sign that consistently occurs at the end of the RC. Cecchetto, Geraci, and Zucchi (2006) analyze it as a demonstrative morpheme glossed as prorel. However, according to Branchini and Donati (2009), pe is not a wh- or relative pronoun; rather it is a determiner for the nominalized RC. In the IHRCs of HKSL, the clause-final index sign may be omitted if the entire clause is marked by appropriate non-manuals, as in (46c). If the index sign occurs, it is coreferential with the head noun within the RC and spatially agrees with it. The index sign is also identical in its manual form to the index sign that is adjacent to the head noun, suggesting that it is more like a determiner than a relative pronoun. However, this clause-final index sign is accompanied by a different set of non-manuals ⫺ mouth-open and eye contact with the addressee. In sum, data from HKSL, ASL, and LIS show that head internal relatives require brow raise to spread over the RCs including the head noun. As for IHRCs, HKSL patterns with the LIS relatives studied by Branchini et al. (2007) in the occurrence of a clause-final indexical sign which phonetically looks like a determiner, the presence of which is probably motivated by the nominal status of the RC. Also, the presence of a relative pronoun as observed in DGS offers crucial evidence for the existence of RCs in that language. In other sign languages, which do not consistently employ such devices, non-manual markers and/or the behavior of temporal adverbials may serve as evidence for RCs.
4. Conclusion In this paper, we have summarized attempts to identify coordinate and subordinate structures in sign languages. We found that one cannot always rely on morphosyntactic devices for the identification and differentiation of coordination and subordination because these devices do not usually show up in the sign languages surveyed so far. Instead, we adopted general diagnostics of grammatical dependency defined in terms of constraints on grammatical operations on these structures. The discussion revealed that the island constraint involved in wh-extraction is consistently observed in sign languages, too, while other constraints (e.g., gapping in coordinate structures) appear to be subject to modality effects. We have also examined the behavior of non-manuals which we hypothesize will offer important clues to differentiate these structures. Spreading patterns, for instance, allow us to analyze verb complementation, embedded negation and yes/no-questions, and relativization strategies. As for the latter, we have shown that sign languages show typological variation similar to that described for spoken languages. For future research, we suggest more systematic categorization of nonmanuals, which we hope will allow us to delineate their functions at different syntactic levels.
361
362
III. Syntax
5. Literature Basilica, David 1996 Head Position and Internally Headed Relative Clauses. In: Language 72, 498⫺531. Bennis, Hans 2001 Alweer Wat Voor (een). In: Dongelmans, Berry/Lallerman, Josien/Praamstra, Olf (eds.), Kerven in een Rots. Leiden: SNL, 29⫺37. Branchini, Chiara/Donati, Caterina 2009 Relatively Different: Italian Sign Language Relative Clauses in a Typological Perspective. In Lipták, Anikó (ed.), Correlatives Cross-Linguistically. Amsterdam: Benjamins, 157⫺191. Branchini, Chiara/Donati, Caterina/Pfau, Roland/Steinbach, Markus 2007 A Typological Perspective on Relative Clauses in Sign Languages. Paper Presented at the 7 th Conference of the Association for Linguistic Typology (ALT 7), Paris, September 2007. Cecchetto, Carol/Geraci, Carlo/Zucchi Sandro 2006 Strategies of Relativization in Italian Sign Language. In: Natural Language and Linguistic Theory. 24(4), 945⫺957. Coulter, Geoffrey R. 1979 American Sign Language Typology. PhD Dissertation, University of California, San Diego. Dachkovsky, Svetlana 2008 Facial Expression as Intonation in Israeli Sign Language: The Case of Neutral and Counterfactual Conditionals. In: Quer, Josep (ed.), Signs of the Time. Selected Papers from TISLR 2004. Hamburg: Signum, 61⫺82. Dryer, Matthews S. 1992 The Greenbergian Word Order Correlations. In: Language, 68, 81⫺138. Edmonds, Joseph E. 1976 A Transformational Approach to English Syntax. New York: Academic Press. Gijn, Ingeborg van 2004 The Quest for Syntactic Dependency. Sequential Complementation in Sign Language of Netherlands. PhD Dissertation, University of Amsterdam. Haspelmath, Martin 2004 Coordinating Constructions: An Overview. In: Haspelmath, Martin (ed.), Coordinating Constructions. Amsterdam: Benjamins, 3⫺39. Herrmann, Annika 2007 The Expression of Modal Meaning in German Sign Language and Irish Sign Language. In: Perniss, Pamela/Pfau, Roland/Steinbach, Markus. (eds.), Visible Variation. Comparative Studies on Sign Language Structure. Berlin: Mouton de Gruyter, 245⫺278. Herrmann, Annika 2010 The Interaction of Eye Blinks and Other Prosodic Cues in German Sign Language. In: Sign Language & Linguistics 13(1), 3⫺39. Johnston, Trevor/Schembri, Adam 2007 Australian Sign Language: An Introduction to Sign Language Linguistics. Cambridge: Cambridge University Press. Keenan, Edward, L 1985 Relative Clauses. In: Shopen, Timothy (ed.), Language Typology and Syntactic Description. Vol. 2: Complex Constructions. Cambridge: Cambridge University Press, 141⫺170. Kibrik, Andrej A. 2004 Coordination in Upper Kuskokwim Athabaskan. In: Haspelmath, Martin (ed.), Coordinating Constructions. Amsterdam: Benjamins. 537⫺553.
16. Coordination and subordination Lehmann, Christian 1986 On the Typology of Relative Clauses. In: Linguistics 24, 663⫺680. Lehmann, Christian 1988 Towards a Typology of Clause Linkage. In: Haiman, John/Thompson, Sandra A. (eds.), Clause Combining in Grammar. Amsterdam: Benjamins, 181⫺226. Liddell, Scott 1978 Nonmanual Signals and Relative Clauses in American Sign Language. In: Siple, Patricia (ed.), Understanding Language through Sign Language Research. New York: Academic Press, 59⫺90. Liddell, Scott 1980 American Sign Language Syntax. The Hague: Mouton. Liddell, Scott 2003 Grammar, Gesture, and Meaning in American Sign Language. Cambridge: Cambridge University Press. Lillo-Martin, Diane 1986 Two Kinds of Null Arguments in American Sign Language. In: Natural Language and Linguistic Theory 4, 415⫺444. Lillo-Martin, Diane 1991 Universal Grammar and American Sign Language: Setting the Null Argument Parameters. Dordrecht: Kluwer. Lillo-Martin, Diane 1992 Sentences as Islands: On the Boundedness of A’-movement in American Sign Language. In: Goodluck, Helen/Rochemont, Michael (eds.), Island Constraints. Dordrecht: Kluwer, 259⫺274. Mithun, Marianne 1988 The Grammaticalization of Coordination. In: Haiman, John/Thompson, Sandra A. (eds.), Clause Combining in Grammar. Amsterdam: Benjamins, 331⫺359. Neidle, Carol/Kegl, Judy/MacLaughlin, Dawn/Bahan, Benjamin/Lee, Robert G. 2000 The Syntax of American Sign Language: Functional Categories and Hierarchical Structure. Cambridge, MA: MIT Press. Nespor, Marina/Vogel, Irene 1986 Prosodic Phonology. Berlin: Mouton de Gruyter. Noonan, Michael 2005 Complementation. In: Shopen, Timothy (ed.), Language Typology and Syntactic Descriptions. Vol. 2: Complex Constructions. Cambridge: Cambridge University Press, 42⫺138. Padden, Carol 1988 Interaction of Morphology and Syntax in American Sign Language. New York: Garland. Petronio, Karen/Lillo-Martin, Diane 1997 Wh-movement and the Position of Spec-CP: Evidence from American Sign Language. In: Language, 18⫺57. Pfau, Roland/Quer, Josep 2010 Nonmanuals: Their Grammatical and Prosodic Roles. In: Brentari, Diane (ed.), Sign Languages: A Cambridge Language Survey. Cambridge: Cambridge University Press, 381⫺402. Pfau, Roland/Markus Steinbach 2005 Relative Clauses in German Sign Language: Extraposition and Reconstruction. In Bateman, Leah/Ussery, Cherlon (eds), Proceeding of the North East Linguistic Society (NELS 35). Amherst, MA: GLSA. 507⫺521. Ross, John R. 1967 Constraints on Variables in Syntax. PhD Dissertation, MIT [Published 1986 as Infinite Syntax, Norwood, NJ: Ablex].
363
364
III. Syntax Ross, John R. 1970 Gapping and the Order of Constituents. In: Bierwisch, Manfred/Heidolph, Karl Erich (ed.), Progress in Linguistics. The Hague: Mouton, 249⫺259. Sandler, Wendy 1999 The Medium and the Message: Prosodic Interpretation of Linguistic Content in Israeli Sign Language. In: Sign Language & Linguistics 2, 187⫺216. Selkirk, Elizabeth 2005 Comments on Intonational Phrasing in English. In: Frota, Sonia/Vigario, Marina/Freitas, Maria Joao (eds.), Prosodies: With Special Reference to Iberian Languages. Berlin: Mouton de Gruyter, 11⫺58. Sze, Felix 2003 Word Order of Hong Kong Sign Language. In: Baker, Anne/Bogaerde, Beppie van den/ Crasborn, Onno (eds.), Cross-linguistic Perspectives in Sign Language Research. Selected Papers from TISLR 2000. Hamburg: Signum, 163⫺191. Sze, Felix 2008 Blinks and Intonational Phrasing in Hong Kong Sign Language. In: Quer, Josep (ed.), Signs of the Time. Selected Papers from TISLR 2004. Hamburg: Signum, 83⫺107. Tang, Gladys 2006 Negation and Interrogation in Hong Kong Sign Language. In: Zeshan, Ulrike (ed.), Interrogative and Negative Constructions in Signed Languages. Nijmegen: Ishara Press, 198⫺224. Tang, Gladys/Brentari, Diane/González, Carolina/Sze, Felix 2010 Crosslinguistic Variation in the Use of Prosodic Cues: The Case of Blinks. In: Brentari, Diane (ed.), Sign Languages: A Cambridge Language Survey. Cambridge: Cambridge University Press, 519⫺542. Tang, Gladys/Sze, Felix/Lam, Scholastica 2007 Acquisition of Simultaneous Constructions by Deaf Children of Hong Kong Sign Language. In: Vermeerbergen, Myriam/Leeson, Lorraine/Crasborn, Onno (eds.), Simultaneity in Signed Language: Form and Function. Amsterdam: Benjamins, 283⫺316. Thompson, Henry 1977 The Lack of Subordination in American Sign Language. In: Friedman, Lynn (eds), On the Other Hand: New Perspectives on American Sign Language. New York: Academic Press, 78⫺94. Vermeerbergen, Myriam/Leeson, Lorraine/Crasborn, Onno (eds.) 2007 Simultaneity in Signed Languages: Form and Function. Amsterdam: Benjamins. Vries, Mark de 2002 The Syntax of Relativization. PhD Dissertation, University of Amsterdam. Utrecht: LOT. Waters, Dafydd/Sutton-Spence, Rachel 2005 Connectives in British Sign Language. In: Deaf Worlds 21(3), 1⫺29. Watson, Richard L. 1966 Clause and Sentence Gradations in Pacoh. In: Lingua 16, 166⫺189. Wilbur, Ronnie B. 1994 Eyeblinks and ASL Phrase Structure. In: Sign Language Studies 84, 221⫺240. Wilbur, Ronnie B./Patschke, Cynthia 1999 Syntactic Correlates of Brow Raise in ASL. In: Sign Language & Linguistics 2(1), 3⫺41. Wilder, Chris 1994 Coordination, ATB, and Ellipsis. In: Groninger Arbeiten zur Generativen Linguistik 37, 291⫺329. Wilder, Chris 1997 Some Properties of Ellipsis in Coordination. In: Alexiadou, Artemis/Hall, T. Alan (eds.), Studies on Universal Grammar and Typological Variation. Amsterdam: Benjamins, 59⫺107.
17. Utterance reports and constructed action Williams, Edwin 1978 Across-the-board Rule Application. In: Linguistic Inquiry 9, 31⫺43. Zeshan, Ulrike 2004 Interrogative Constructions in Signed Languages: Cross-linguistic Perspectives. In: Language 80(1), 7⫺39.
Gladys Tang and Prudence Lau, Hong Kong (China)
17. Utterance reports and constructed action 1. 2. 3. 4. 5. 6. 7.
Reporting the words, thoughts, and actions of others Early approaches to role shift Role shift as constructed action Formal approaches Integration Conclusion Literature
Abstract Signers and speakers have a variety of means to report the words, thoughts, and actions of others. Direct quotation gives (the utterer’s version of) the quoted speaker’s point of view ⫺ but it need not be verbatim, and can be used to report thoughts and actions as well as words. In sign languages, role shift is used in very similar ways. The signer’s body or head position, facial expressions, and gestures contribute to the marking of such reports, which can be considered examples of constructed action. These reports also include specific grammatical changes such as the indexical (shifting) use of first-person forms, which pose challenges for semantic theories. Various proposals to account for these phenomena are summarized, and directions for future research are suggested.
1. Reporting the words, thoughts, and actions of others Language users have a variety of means with which to report the words, thoughts, and actions of others. Indirect quotation (or indirect report), as in example (1a), reports from a neutral, or narrator’s point of view. Direct quotation (or direct report, sometimes simply reported speech), as in (1b), makes the report from the quoted person’s point of view. (1)
Situation: Sam, in London, July 22, announces that she will go to a conference in Bordeaux July 29. Speaker is in Bordeaux July 31.
365
366
III. Syntax a. Indirect discourse description: Sam said that she was coming to a conference here this week. b Direct discourse description: Sam said, “I’ll go to a conference there next week.” There are several important structural differences between the indirect and direct types. In the indirect description, an embedded clause is clearly used, whereas in the direct discourse, the relationship of the quotation to the introducing phrase is arguably not embedding. In addition, the interpretation of indexicals is different in the two types. Indexicals are linguistic elements whose reference is dependent on aspects of the context. For example, the reference of ‘I’ depends on who is speaking at the moment; the interpretation of ‘today’ depends on the time of utterance; etc. In direct discourse, the reference of the indexicals is interpreted relative to the situation of the quoted context. It is often thought that there is another difference between indirect and direct discourse, viz., that direct discourse should be a verbatim replication of the original event, whereas this requirement is not put on indirect discourse. However, this idea has been challenged by a number of authors. Clark and Gerrig (1990) discuss direct quotation and argue that although it “is CONVENTIONALLY implied that the wording [of direct quotation] is verbatim in newspapers, law courts, and literary essays, […] [it is] not elsewhere.” On their account, quotations are demonstrations which depict rather than describe their referents. An important part of this account is that the demonstrator selects some, but not all of the aspects of the report to demonstrate. In addition, they point out that the narrator’s viewpoint can be combined with the quotation through tone of voice, lexical choice, and gestures. Clark and Gerrig (1990, 800) contrast their account with the classical ‘Mention theory’: “The classical account is that a quotation is the mention rather than the use of an expression”. They critique this approach: It has serious deficiencies (see, e.g., Davidson 1984). For us the most obvious is that it makes the verbatim assumption […] [M]ention theory assumes, as Quine 1969 says, that a quotation ‘designates its object not by describing it in terms of other objects, but by picturing it’. ‘When we quote a man’s utterance directly,’ Quine says, ‘we report it almost as we might a bird call. However significant the utterance, direct quotation merely reports the physical incident’ (219). But precisely what it pictures, and how it does so, are problematic or unspecified (Davidson 1984). In particular, it makes no provision for depicting only selected aspects of the ‘physical incident’, nor does it say what sort of thing the act of picturing is.
Tannen (1989, 99⫺101) also criticizes the verbatim approach to direct quotation. She says:

Even seemingly ‘direct’ quotation is really ‘constructed dialogue,’ that is, primarily the creation of the speaker rather than the party quoted. […] In the deepest sense, the words have ceased to be those of the speaker to whom they are attributed, having been appropriated by the speaker who is repeating them.
Tannen also recognizes that what is commonly thought of as ‘direct quotation’ can be used to express not only the (approximate) words of another, but also their thoughts. She points out (Tannen 1989, 115): “Presenting the thoughts of a character other than oneself is a clear example of dialogue that must be seen as constructed, not reported.”
Other researchers have investigated ways in which speakers both select aspects of a dialogue to represent, and go beyond representing the actual speaker’s event to add aspects of their own point of view. For example, Günthner (1999, 686) says that a speaker ‘decontextualizes’ speech from its original context and ‘recontextualizes’ it in a new conversational surrounding.

In recontextualizing utterances, speakers, however, not only dissolve certain sequences of talk from their original contexts and incorporate them into a new context, they also adapt them to their own functional intentions and communicative aims. Thus, the quoted utterance is characterized by transformations, modifications, and functionalizations according to the speaker’s aims and the new conversational context. Here, prosody and voice quality play important roles. The use of different voices is an interactive resource to contextualize whether an utterance is anchored in the reporting world or in the storyworld, to differentiate between the quoted characters, to signal the particular activity a character is engaged in, and to evaluate the quoted utterance.
In spoken language, prosody and voice quality play important roles in conveying point of view, and in ‘constructing’ the dialogue that is reported. Streeck (2002) discusses how users of spoken language may also include mimetic enactment in their ‘quotations’, particularly those introduced by be + like. He calls such usage “body quotation”: “a mimetic enactment, that is, a performance in which the speaker acts ‘in character’ rather than as situated self” (Streeck 2002, 581). One of his examples (Streeck 2002, 584) is given in (2).

                          gesture “sticking card into”
(2) But then they’re like “Stick this card into this machine”
Streeck (2002, 591) goes on to describe enactment further:

During an enactment, the speaker pretends to inhabit another body ⫺ a human one or that of an alien, perhaps even a machine, or her own body in a different situation ⫺ and animates it with her own body, including the voice. Enactments have the character of samples: They are made out to possess the features of, and to be of the same kind as, the phenomena that they depict. In other words, in enactments, speakers’ expressive behaviors exemplify actions of the story’s characters.
Speakers can thus report the speech, thoughts, and even actions of another, using the syntax of direct quotation. In this way, the speaker’s interpretation of the original actor’s point of view can also be expressed. These observations about reporting can be useful in understanding a range of phenomena in sign languages, discussed next. These phenomena cover the full continuum between reporting the speech (throughout, the term ‘speech’ is intended to include signed utterances), thoughts, and actions of another. Previous research has varied between treating these phenomena as quite distinct from each other and treating them as closely related. It will be argued here that they are indeed related, in ways very similar to the observations just made about spoken languages.
There have been a variety of proposals for how to analyze these phenomena. These proposals will be reviewed, and the chapter will conclude with a suggestion regarding how future analyses might fruitfully proceed.
2. Early approaches to role shift

In early research on the structure of American Sign Language (ASL) and other sign languages, a phenomenon known as ‘role shift’ or ‘role play’ was discussed. The idea was that the grammar of these sign languages included a mechanism whereby signers could shift into the role of a character, conveying information from that character’s perspective. This phenomenon is characteristic of particularly skilled signing, and used especially during story-telling. The descriptions of role shift made it seem like a special way in which sign language could take advantage of the visual modality (Friedman 1975). For example, Mandel (1977, 79⫺80) said:

It is common for a signer to take the role of a person being discussed […] When two or more people are being talked about, the signer can shift from one role to another and back; and he usually uses spatial relationships to indicate this ROLE-SWITCHING. In talking about a conversation between two people, for instance, a signer may alternate roles to speak each person’s lines in turn, taking one role by shifting his stance (or just his head) slightly to the right and facing slightly leftward (thus representing that person as being on the right in the conversation), and taking the other role by the reverse position. […] Similar role-switching can occur in nonquotative narrative. […] A signer may describe not only what was done by the person whose role he is playing, but also what happened to that person.
Pfau and Quer (2010, 396) expand on the difference between quotational and nonquotational uses of role shift:

Role shift (also known as role taking and referential shift) plays two, sometimes overlapping roles in the grammar of sign languages. First, in its quotational use, it is used to directly report the speech or the unspoken thoughts of a character (also known as constructed discourse). […] Second, in its nonquotational use, role shift expresses a character’s action, including facial expressions and nonlinguistic gestures. That is, the signer embodies the event from the character’s perspective. This embodiment is also referred to as constructed or reported action.
An illustration of role shift is given in Figure 17.1. In this example, the signer indicates the locus of the wife by her eye gaze and lean toward the right during the sign say; then, by shifting her shoulders and turning her head to face left, she ‘assumes’ the ‘role’ of the wife, and the following signs are understood as conveying the wife’s words. Padden (1986, 48⫺49) made the following comments about role-shifting:

Role-shifting is marked by a perceptible shift in body position from neutral (straight facing) to one side and a change in the direction of eye gaze for the duration of ‘the role.’ […] in informal terms, the signer ‘assumes’ the ‘role’ […]
         rs: wife
wife say                                                             [ASL]

Fig. 17.1: Role shift example
‘Role-shifting’ is perhaps an unfortunate term. It suggests structures which resemble playacting; indeed, this is how these structures have been described. […] As it turns out, there are interesting constraints on role-shifting which indicate that its place in the syntactic and discourse system of ASL should be explored further.
Padden (1986, 49⫺50) provided helpful examples of role-shifting, such as those given in (3) and (4).

            rs: husband
(3) husband                                                          [ASL]
    ‘The husband goes, “Really, I didn’t mean it.”’

            rs: husband
(4) husband                                                          [ASL]
    ‘The husband was like ⫺ “here I am, working.”’
In example (3), the husband’s words or perhaps thoughts are reported by the signer. In example (4), Padden uses be + like for the English translation. As discussed above, quotations introduced with be + like in English frequently represent what Streeck (2002) calls “body quotation”. Padden describes the example as not replicating discourse, and offers as an alternative English translation, “The husband was working”. The example may be quoting the husband’s thoughts, but it may be ‘quoting’ just his actions, from his point of view.
Lillo-Martin (1995) also noted that what role shift conveys is very similar to what is conveyed with the colloquial English use of like, as in, “He’s like, I can’t believe you did that!” (This use of like is to be distinguished from its use as a hedge or focus marker; Miller/Weinert 1995; Underhill 1988.) Like need not convey direct discourse, but portrays the point of view of its subject. Researchers have examined the use of like as an introducer of “internal dialogue, gesture, or speech” (Ferrara/Bell 1995, 285; cf. also Romaine/Lange 1991). In (5) some natural examples collected by Ferrara and Bell (1995, 266) are given. They could be representations of speech, but may also reflect internal dialogue or attitude, and may well be accompanied by relevant gestures.

(5) a. I was like, “Who is it?”
    b. You’re like, “Okay.”
    c. She’s like, “Well I take it y’all are dating now.”
    d. My Mom’s like, you know, “I trust your driving.”
    e. So we’re like, “What?” [motorist in another car tries to signal to the narrator that his car is on fire]

Padden’s translation of (4) makes explicit this comparison between role shift and the use of English be + like. The point that role shift does not necessarily quote a person’s words or even thoughts is also made in the following examples from Meier (1990, 184). In example (6a), the first-person pronoun (glossed index_s by Meier) is to be interpreted as representing what the girl said. All the rest of the example within the role shift (indicated by 1[ ]1) represents the girl’s actions. In example (6b), no first-person pronoun is used. However, the event is still narrated from the girl’s point of view, as indicated by the notation 1[ ]1, and the eye gaze. The report here represents the girl’s actions as well as her emotional state (scared).
(6) a. yesterday index_s see_j girl walk _jperson-walk-to_k
         gaze down mm
       1[walk.
         gaze i       gaze i                   gaze i           gaze i
       look-up_i.   man _iperson-move-to_s.  index_s scared.  hit_s]1            [ASL]
       ‘Yesterday I saw this girl. She walked by in front of me. She was strolling along, then she looked up and saw this man come up to her. “I’m scared” [she said]. He hit her.’

    b.   gaze down mm
       1[walk.
         gaze i       gaze i                   gaze i
       look-up_i.   man _iperson-move-to_s.  scared.]1
       ‘She was strolling along, then she looked up and saw this man come up to her. She was scared.’

For the purposes of this chapter, all these types of reports are under consideration. Some report the words or thoughts of another (although not necessarily verbatim). Such cases will sometimes be referred to as quotational role shift. Other examples report a character’s emotional state or actions, including, as Mandel pointed out, actions of which the character is recipient as well as agent. These cases will be referred to as non-quotational. What unifies these different types of reports is that they portray the event from the point of view of the character, as interpreted by the speaker.
Some analyses treat these different uses of role shift as different aspects of the same phenomenon, while others look at the uses more or less separately. For example, many researchers have focused on the quotational uses of role shift, and they may restrict the term to these uses (including non-verbatim quotation of words or thoughts). Others focus on the non-quotational uses. Kegl (1986) discussed what is considered here a type of non-quotative use of role shift, which she called a role prominence marker ⫺
specifically, a role prominence clitic. She proposed that this marker is a subject clitic, and that the NP agreeing with it is interpreted with role prominence ⫺ that is, it marks the person from whose perspective the event is viewed.
Early researchers concluded that role shift is not the same as direct reported speech, although it is sometimes used for that purpose. Banfield’s (1973, 9) characterization of direct speech, which reflected a then widely-held assumption, was that it “must be considered as a word for word reproduction” of the quoted speech, in contrast to indirect speech. As discussed in section 1, some more recent researchers have rejected this view of direct speech. However, earlier analyses of direct speech would not suffice to account for role shift, since it was clear that role shift is not limited to word-for-word reproduction of speech, but is a way of conveying a character’s thoughts, actions, and perspective.
Likewise, role shift was early seen as clearly different from indirect speech. One of the important characteristics of quotational role shift is a change in interpretation for first-person pronouns and verb agreement. As in direct quotation, the referent of a first-person pronoun or verb agreement under role shift is not the signer. It is the person whose speech or thoughts are being conveyed. This is illustrated in example (3) above. The signer’s use of the first-person pronoun is not meant to pick out the signer of the actual utterance, but the speaker of the quoted utterance (in this case, the husband). Therefore, an analysis of role shift as indirect speech also would not suffice.
Engberg-Pedersen (1993, 1995), working on Danish Sign Language (DSL), divided role shifting into three separate phenomena, as given in (7) and described in the following paragraph (Engberg-Pedersen 1993, 103). Note that Engberg-Pedersen uses the notation ‘1.p’ to refer to the first person pronoun, and ‘locus c’ to refer to the signer’s locus.

(7) 1. shifted reference, i.e., the use of pronouns from a quoted sender’s point of view, especially the use of the first person pronoun 1.p to refer to somebody other than the quoting sender;
    2. shifted attribution of expressive elements, i.e., the use of the signer’s face and/or body posture to express the emotions or attitude of somebody other than the sender in the context of utterance;
    3. shifted locus, i.e., the use of the sender locus for somebody other than the signer or the use of another locus than the locus c for the signer.
In shifted reference, which Engberg-Pedersen says is confined to direct discourse, the first person pronoun is used to refer to someone other than the signer; that is, the person quoted. In shifted attribution of expressive elements, the signer’s signs, face, and body express the emotions or attitude of another. This may be within a direct discourse, but does not necessarily have to be; it may be within ‘represented thought’. Engberg-Pedersen compares shifted attribution of expressive elements to the use of voice quality to distinguish speakers in reported dialogues in spoken languages. The third category, shifted locus, is similar to shifted reference, in that the signer’s locus is used for reference to another ⫺ but in this case, the signer’s locus is used in verb agreement only, not in overt first-person pronouns. Unlike shifted reference, shifted locus is not limited to direct discourse. Furthermore, according to Engberg-Pedersen, shifted locus is not always marked overtly by a change in body position. (Padden made the same observation about examples such as the one in (4).)
a. She looked at him arrogantly (woman’s point of view) [DSL]
b. She looked at him arrogantly (man’s point of view) [DSL]
Fig. 17.2: Distinction between shifted attribution of expressive elements and shifted locus (Reprinted from Engberg-Pedersen 1993 with permission)
Engberg-Pedersen shows interesting ways in which these different characteristics of ‘role play’ are separable. For example, the signer’s locus can be used to refer to one character under shifted locus, while the facial expression conveys the attitude of a different character under shifted attribution of expressive elements. An example from Engberg-Pedersen is given in Figure 17.2. Both panels of Figure 17.2 show the verb look-at, and in both, the signer’s face is used to express the woman’s (i.e., the referent of the grammatical subject’s) point of view. However, the verb agreement is different in the two panels. In Figure 17.2a, the verb shows regular agreement with the object/goal (the man). However, in Figure 17.2b, the verb uses the first-person locus for the object/goal agreement. This means that while the signer’s locus is used to represent the man for purposes of verb agreement (under shifted locus), it is representing the woman for the shifted attribution of expressive elements.
Engberg-Pedersen’s characterization makes an explicit claim about the use of first-person pronouns which needs further consideration. She says that the use of overt first-person pronouns to refer to someone other than the signer is restricted to direct discourse (quotation). However, the signer’s locus (i.e., first person) can be used in verb agreement to pick out someone other than the signer in non-direct-discourse contexts. This contrast will be discussed in section 5.
Descriptions of role shift in other sign languages similar to those presented thus far can be found for British Sign Language (BSL, Morgan 1999; Sutton-Spence/Woll 1998), Catalan Sign Language (LSC, Quer/Frigola 2006), German Sign Language (DGS, Herrmann/Steinbach 2012), Nicaraguan Sign Language (ISN, Pyers/Senghas 2007), Quebec Sign Language (LSQ, Poulin/Miller 1995), and Swedish Sign Language (SSL, Ahlgren 1990; Nilsson 2004).
3. Role shift as constructed action

Although most discussions of role shift until the mid-1990s differentiated it from reported speech/direct quotation because of the idea that such quotation should be verbatim, some sign language researchers were paying attention to developments in the field of discourse analysis which recognized the problems with such a claim for direct quotation more generally. They adopted the view of Tannen (1989) that direct quotation should be seen as constructed. Liddell and Metzger (1998), following on work by Winston (1991) and Metzger (1995), describe instances of role shift or role play in ASL as constructed action.
Metzger (1995, 261) describes an example, given in (8), in which constructed dialogue is a part of a larger sequence of constructed action. In the example, the signer is portraying a man seated at a card table looking up at another man who is asking for someone named Baker. The example shows the card player’s constructed dialogue, which includes his gesture, raising his hand, and his facial expression and eye gaze. It also includes his constructed action prior to the admission, looking up at the stranger, co-occurring with the sign look-up. The whole example starts with the narrator signing man, to inform the audience of the identity of the character whose actions and utterance will be (re-)constructed next.

    to addressee
    gaze forward to up left
    lower lip extended/head tilt/gaze up left
(8) man cards-in-hand look-up, “that (raise hand) that pro.1”        [ASL]
    ‘So one of the guys at the table says, “Yeah, I’m Baker, that’s me.”’
This flow between narrator, constructed action, and constructed dialogue is characteristic of ASL stories. As we have seen, however, it is not something special to sign languages, or some way in which sign languages are different from spoken languages. Speakers also combine words, gestures, facial expressions, and changes in voice quality to convey the same range of narrative components. Liddell and Metzger (1998) draw these parallels quite clearly. They aim to point out that parts of a signed event are gestural while other parts are grammatical, just as in the combination of speech and gesture, as when a speaker says “Is this yours?” while pointing to an object such as a pen. They state (Liddell/Metzger 1998, 659), “The gestural information is not merely recapitulating the same information which is grammatically encoded. The addressees’ understanding of the event will depend on both the grammatically encoded information and the gestural information.” This combination of grammatical and gestural is crucially involved in constructed action.
Liddell and Metzger use the theory of Mental Spaces proposed by Fauconnier (1985), and the notion of mental space blends discussed by Fauconnier and Turner (1996), to account for the range of meanings expressed using constructed actions. In their view, the signer’s productions reflect a blend of two mental spaces. One of these mental spaces may be the signer’s mental representation of their immediate environment, called Real Space. Other spaces are conceptual structures representing particular aspects of different time periods, or aspects of a story to be reported. In their paper, Liddell and Metzger analyze examples elicited by a Garfield cartoon. In these examples, the signer’s mental conception of the cartoon, called Cartoon space, can blend with Real Space. Using such a blend, the signer may select certain aspects of the situation to be conveyed in different ways. This can be illustrated with example (9) (Liddell/Metzger 1998, 664⫺665).
(9) cat look-up “oh-shit” cl-x(press remote control)                 [ASL]
    ‘The cat looked up at the owner. He thought, “Oh shit” and pressed the remote control.’
As with Metzger’s (1995) example given in (8) above, this example includes the narrator’s labeling of the character, the character’s constructed action (both in the signer’s looking up and in his signed description look-up), and the character’s constructed dialogue (his thoughts). Liddell and Metzger point out that the signer’s hands do not represent the character’s hands during the sign look-up, but that they are constructing the character’s signs during the expletive “oh-shit”. Of course, the cat Garfield does not sign even in the cartoon, but the signer is ‘constructing’ his utterance ⫺ just as speakers might ‘speak’ for a cat (Tannen 1989 gives such examples as part of her argument for dissociating constructed dialogue from verbatim quotation). To illustrate the range of meanings (generally speaking) expressed by different types of constructed action, Liddell and Metzger (1998, 672) give the following table:

Tab. 17.1: Types of constructed actions and their significance

Types of constructed actions                    What they indicate
Articulation of words or signs or emblems       What the |character| says or thinks
Direction of head and eye gaze                  Direction |character| is looking
Facial expressions of affect, effort, etc.      How the |character| feels
Gestures of hands and arms                      Gestures produced by the |character|
The analysis presented by Liddell and Metzger emphasizes the similarity between constructed action in sign language and its parallels in spoken languages. As discussed earlier, speakers use changes in voice quality, as well as gestures, to ‘take on a role’ and convey their construction of the actions, thoughts, or words of another. These changes and gestures occur together with spoken language elements. It seems clear that the main difference is that, for signers, all these components are expressed by movements of the hands/body/facial expressions, so separating the gesture from the grammatical is more challenging.
Other authors have made use of the cognitive-linguistics account of constructed action proposed by Liddell and Metzger and have extended it in various ways. For example, Aarons and Morgan (2003) discuss the use of constructed action along with classifier predicates and lexical signs to express multiple perspectives sequentially or simultaneously in South African Sign Language.
Dudis (2004) starts with the observation that the signer’s body is typically used in constructed action to depict a body. He argues, however, that not all parts of the signer’s body will be used in the blend; furthermore, different parts of the signer’s body can be partitioned off so as to represent different parts of the input to the blend. For example, Dudis discusses two ways of showing a motorcyclist going up a hill. In one, the signer’s torso, head, arms, hands, and facial expression all convey the motorcyclist: the hands holding the handles, the head tilted back, looking up the hill, the face showing the effort of the climb. In the second, the signer’s hands are ‘partitioned off’, and used to produce a verb meaning vehicle-goes-up-hill. But the torso, head, and face
are still constructing aspects of the motorcyclist’s experience. As Dudis (2004, 228) describes it:

A particular body part that can be partitioned off from its role in the motorcyclist blend, in this instance the dominant hand. Once partitioned off, the body part is free to participate in the creation of a new element. This development does not deactivate the motorcyclist blend, but it does have an impact. The |motorcyclist’s| hands are no longer visible, but conceptually, they nevertheless continue to be understood to be on the |handles|. This is due to pattern completion, a blending operation that makes it possible to ‘fill in the blanks’.
Dudis shows that in such multiple Real Space blends, different perspectives requiring different scales may be used. One perspective is the participant viewpoint, in which “objects and events […] are described from the perspective of the [participant]. The scalar properties of such a blend, as Liddell (1995) shows, are understood to be life-sized elements, following the scale of similar objects in reality” (Dudis 2004, 230). The other perspective is a global viewpoint. For example, when the signer produces the verb for a motorcycle going uphill, the blend portrayed by the hands uses the global viewpoint. As Dudis (2004, 230) says:

The smaller scale of the global perspective depiction involving the |vehicle| is akin to a wide-angle shot in motion-picture production, while the real-space blend containing the participant |signer as actor| is akin to a close-up shot. It is not possible for the |signer as actor| and the |vehicle| to come into contact, and the difference in scale is one reason why.
Janzen (2004) adds some more important observations about the nature of constructed action and its relationship to presenting aspects of a story from a character’s perspective. First, Janzen emphasizes a point made also by Liddell and Metzger (1998), that there is not necessarily any physical change in the body position to accompany or indicate a change in perspective. To summarize (Janzen 2004, 152⫺153):

Rather than using a physical shift in space to encode differing perspectives as described above, signers frequently manipulate the spatially constructed scene in their discourse by mentally rotating it so that other event participants’ perspectives align with the signer’s stationary physical vantage point. No body shift toward various participant loci within the space takes place. … [T]he signer has at least two mechanisms ⫺ a physical shift in space or mental rotation of the space ⫺ with which to accomplish this discourse strategy.
Because of the possibility for this mental rotation, Janzen (2004, 153) suggests, “this discourse strategy may represent a more ‘implicit’ coding of perspective (Graumann 2002), which requires a higher degree of inference on the part of the addressee.” This comment may go some way toward explaining a frequent observation, which is that narratives containing a large amount of constructed action are often more difficult for second-language learners to follow (Metzger 1995). Despite the frequent use of gesture in such structures, they can be difficult for the relatively naïve addressee who has the task of inferring who is doing what to whom. Janzen also argues that constructed action does not always portray events from a particular perspective, but is sometimes used to indicate which character’s perspective
is excluded. To indicate perspective shifts towards and away from a character, an alternate character might be employed, but the choice of alternate character may be less important than the simple shift away. In fact, Janzen claims that these perspective shifts can also be used with unobserved events, indicating (e.g., by turning the head away) that a character is unaware of the event, and not involved in it. In such cases, body partitioning such as Dudis describes is needed: the head/eyes show the perspective of the non-observer, while the hands may sign or otherwise convey the unseen event.
4. Formal approaches

The description of role shift as a type of constructed action recognizes that many components of this phenomenon are analogous to the use of gestures and changes in voice quality during narration in spoken languages. However, some researchers have nevertheless been interested in pursuing a formal analysis of certain aspects of role shift, particularly the change in reference for the first-person pronoun.
Lillo-Martin (1995) compared shifted reference of first-person pronouns with the use of a logophoric pronoun in some spoken languages. In languages such as Abe, Ewe, and Gokana, a so-called ‘logophoric pronoun’ is used in the embedded clause of certain verbs, especially verbs that convey another’s point of view, to indicate co-reference with a matrix subject or object (Clements 1975; Hyman/Comrie 1981; Koopman/Sportiche 1989). In the example in (10a) (Clements 1975, 142), e is the non-logophoric pronoun, which must pick out someone other than the matrix subject, Kofi. In (10b), on the other hand, yè is the logophoric pronoun, which must be co-referential with Kofi.

(10) a. Kofi be   e-dzo
        Kofi say  pro-leave
        ‘Kofi_i said that he_j left.’                                [Ewe]
     b. Kofi be   yè-dzo
        Kofi say  Log-leave
        ‘Kofi_i said that he_i left.’
Lillo-Martin (1995) proposed that the ASL first-person pronominal form can serve as a logophoric pronoun in addition to its normal use. Thus, in logophoric contexts (within the scope of a referential shift), the logophoric pronoun refers to the matrix subject, not the current signer. Lillo-Martin further proposed that ASL referential shift involves a point of view predicate, which she glossed as pov. pov takes a subject which it agrees with, and a clausal complement (see Herrmann/Steinbach (2012) for an analysis of role shift as a non-manual agreement operator). This means that the ‘quoted’ material is understood as embedded whether or not there is an overt matrix verb such as say or think. Any first-person pronouns in the complement to the pov predicate are logophoric; they are interpreted as co-referential with the subject of pov. According to Lillo-Martin’s (1995, 162) proposal, the structure of a sentence with pov, such as (11), is as in (12).
(11) _amom _apov 1pronoun busy.                                      [ASL]
     ‘Mom (from mom’s point of view), I’m busy.’ = ‘Mom’s like, I’m busy!’

(12) [tree diagram: pov with its agreeing subject and a CP complement introduced by an operator; not reproduced here]
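Informally, and simplifying the tree, the structure in (12) can be rendered as a labelled bracketing (a sketch based on the description that follows, with subscripts marking binding, rather than Lillo-Martin’s exact notation):

[IP _amom_i [VP _apov [CP Op_i [IP 1pronoun_i busy ]]]]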
According to the structure in (12), pov takes a complement clause. This CP is introduced by an abstract syntactic operator, labeled Op. The operator is bound by the subject of pov ⫺ the subject c-commands it and they are co-indexed. The operator also binds all logophoric pronouns which it c-commands ⫺ hence, all 1pronouns in the complement clause are interpreted as coreferential with the subject of pov.
Lee et al. (1997) argue against Lillo-Martin’s analysis of role shift. They focus on instances of role shift introduced by an overt verb of saying, as in the example given in Figure 17.1 above, or example (13) below (Lee et al. 1997, 25).

                  rs_i
(13) john_i say ix_1p_i want go                                      [ASL]
     ‘John said: “I want to go.”’
Lee et al. argue that there is no reason to consider the material following the verb of saying as part of an embedded clause. Instead, they propose that this type of role shift is simply direct quotation. As with many spoken languages, the structure would then involve two logically related but syntactically independent clauses. Lee et al. suggest that the use of non-manual marking at the discourse level, specifically head tilt and eye gaze, functions to identify speaker and addressee. Since Lee et al. only consider cases with an overt verb of saying, they do not include in their analysis non-quotational role shift. The possibility that both quotational and non-quotational role shift might be analyzed as forms of direct discourse will be taken up in more detail in section 5.
The analysis of role shift, particularly with respect to the issue of shifting reference, was recently taken up by Zucchi (2004) and Quer (2005, 2011). Zucchi and Quer are both interested in a theoretical claim made on the basis of spoken language research by Kaplan (1989). Kaplan makes the following claim about indexicals, as summarized by Schlenker (2003, 29): “the value of an indexical is fixed once and for all by the context of utterance, and cannot be affected by the logical operators in whose scope it may appear”. In other words, we understand indexicals based on the context, but their reference does not change once the context is established. Consider the examples in (14)⫺(15), modified from Schlenker (2003).

(14) a. John thinks that I am a hero.
     b. John thinks that he is a hero.

(15) a. John says that I am a hero.
     b. John says that he is a hero.
In English, the (a) examples cannot be interpreted as the (b) examples ⫺ that is, the reference of ‘I’ must be taken to be the speaker; it does not change to represent the speaker or thinker of the reported event (John). It is of course this shifting of reference which takes place in direct discourse in English, as in (16). This case is specifically excluded from Kaplan’s concern.

(16) John says, “I am a hero.”
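Schematically, in a Kaplanian semantics where expressions are interpreted relative to a context $c$ and an index $w$ (a standard textbook rendering, given here for concreteness rather than taken from the works under discussion):

$[\![\text{I}]\!]^{c,w} = \mathrm{speaker}(c)$ for every index $w$,

so that no operator embedding the clause can alter what ‘I’ picks out; a context-shifting operator would instead evaluate its complement at some shifted context $c'$ ⫺ for the shifted reading of (15a), one with $\mathrm{speaker}(c') = \text{John}$.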
Kaplan’s claim is that no language can interpret indexicals in non-direct discourse contexts as shifted, in the way that they are interpreted in direct discourse. He says that if an operator existed which would allow such a shift, it would be a ‘monster’. Schlenker (2003) objects to Kaplan’s claim on the basis of evidence from a number of languages that do, he claims, allow such ‘monsters’. One type of example comes from logophoric pronouns, which were discussed earlier. Clearly logophoric pronouns seem to do exactly what Kaplan’s monsters would do, providing counter-evidence for his claim that they do not exist. On the other hand, it is important not to allow indexicals to shift willy-nilly, for surely this would lead to results incompatible with any natural language. Schlenker’s solution is to establish context variables introduced by matrix verbs such as ‘say’ or ‘think’, according to which shifting indexicals will be interpreted. Different languages then specify, for particular indexicals, the domain within which they must be interpreted.
Zucchi (2004) considers whether role shift in sign language is another example showing that monsters do in fact exist. His data focus on Italian Sign Language (LIS), but it appears that the basic phenomenon is the same as we have seen for other sign languages as well. Zucchi assumes that the quotational and non-quotational uses of role shift are distinct in terms of at least some of the structures they use. As for the quotational use of role shift, this would not be problematic for Kaplan’s claim should this use be equivalent to direct discourse, since direct discourse has already been excluded. However, Zucchi argues that non-quotational role shift still shows that the interpretation of indexicals must be allowed to shift in non-direct discourse contexts. In this context, a claim made by Engberg-Pedersen (1993), cited in (7) above, becomes very relevant. Recall that Engberg-Pedersen claimed that (DSL) first-person pronouns are only used in the shifted way within direct discourse. If shifted pronouns can only be used in direct discourse, is there any ‘monster’ to be concerned about?
The answer is ‘yes’. Numerous examples of role shift, including those provided by Engberg-Pedersen, show that the verb may be produced with first-person agreement which is interpreted as shifted, just as first-person pronouns are shifted. This is what Engberg-Pedersen calls ‘shifted locus’ (as opposed to ‘shifted reference’). The issue of why direct discourse allows shifted pronouns, while other cases of role shift only allow shifted locus, will be discussed in section 5. For now, the important point is that verb agreement with first person is just as ‘indexical’ as a first-person pronoun for the issue under discussion.
With this in mind, Zucchi pursues a common analysis of shifting indexicals in quotational and non-quotational contexts. It has three parts. The first part is the introduction of another variable, this one for the speaker/signer (σ). Ordinarily, this variable will refer to the speaker/signer of the actual utterance. However, Zucchi proposes that the grammar of LIS also includes a covert operator which assigns a different value to the variable σ. Furthermore, he proposes that the non-manual markings of a role shift “induce a presupposition on the occurrence of the signer’s variable, namely the presupposition that this variable denotes the individual corresponding to the position toward which the body (or the eye gaze, etc.) shifts” (Zucchi 2004, 14). In order to satisfy this presupposition in shifted contexts, the operator that assigns a different value to the speaker/signer variable must be invoked.
Why does Zucchi use presuppositional failure to motivate the use of the operator? It is because he seeks a unified analysis of quotational and non-quotational shifts. He argues that the non-manual marking is “not in itself a grammatical marker of quotes or of non quotational signer shift (two functions that could hardly be accomplished by a single grammatical element)” (Zucchi 2004, 15⫺16). The non-manual marking simply indicates that the presupposition regarding the σ variable is at stake.
Does this analysis show that there are, indeed, monsters of the type Kaplan decried? In fact, Zucchi argues that neither the operator he proposes for role shift nor the examples used by Schlenker actually constitute monsters. On Zucchi’s analysis of LIS role shift, it is important that only the signer be interpreted as shifted. Thus, the role shift operator does not change all of the features of the context, and therefore it is not a monster.
However, Quer (2005, 2011) suggests that Zucchi’s analysis may be oversimplified. He proposes a different solution to the problem, although like Zucchi his goal is to unify the analysis of shifting indexicals in quotational and non-quotational uses of role shift, bringing in new data from Catalan Sign Language (LSC). Quer’s proposal moves the discussion further by bringing in data on the shifting (or not) of indexicals in addition to pronouns, such as temporal and locative adverbials. Relatively little research on role shift has mentioned the shiftability of these indexicals, so clearly more research is needed on their behavior. According to Quer, such indexicals show variable behavior in LSC. Importantly, some may shift within the context of a role shift, while others may not. Herrmann and Steinbach (2012) report a similar variability in context shift for locative and temporal indexicals in German Sign Language (DGS). Consider the examples in (17) (Quer 2005, 153⫺154):
          t                              RS-i
(17) a. ix_a madrid joan_i think ix-1_i study finish here madrid     [LSC]
        ‘When he was in Madrid, Joan thought he would finish his studies there in Madrid.’
          t                                     RS-i
     b. ix_a madrid_m moment joan_i think ix-1_i study finish here_b
        ‘When he was in Madrid, Joan thought he would finish his studies in Barcelona.’

According to Quer, when under the scope of role shift the locative adverbial here can be interpreted vis-à-vis the context of the reported event (as in (17a)), or the context of the utterance (as in (17b), if it is uttered while the signer is in Barcelona). As long as adverbials can shift as well as pronouns, it is clear that none of the previous formal analyses, which focused on the shift of the pronoun exclusively, is adequate. Amending such analyses by adding temporal adverbials to the list of indexicals that may shift would lead to an unnecessarily complex analysis, if instead an alternative analysis can be developed which would include both pronominal and adverbial indexicals. This is the approach pursued by Quer.
Quer’s analysis builds on the proposals of Lillo-Martin (1995), but implements them in a very different way. He proposes that role shift involves a covert Point of View Operator (PVOp), which is an operator over contexts à la Schlenker, sitting in a high functional projection in the left periphery of the clause. While Lillo-Martin’s analysis has a pov predicate taking a complement clause as well as an operator binding indexical pronouns, Quer’s proposal simplifies the structure involved while extending it to include non-pronominal indexicals. Although the PVOp proposed by Quer is covert, he claims that it “materializes in RS nonmanual morphology” (Quer 2005, 161). In this way, he claims, it is similar to other sign language non-manual markers that are argued to be realizations of operators.
Quer’s proposal is of special interest with regard to the possibility that some indexicals shift while others do not, as illustrated in (17b) earlier. As he notes, such examples violate the ‘Shift Together Constraint’ proposed by Anand and Nevins (2004), which states that the various indexicals in a shifting context must all shift together. Examples like this should be considered further, and possibly fruitfully compared with ‘free indirect discourse’, or ‘mixed quotation’, mixing aspects of direct and indirect quotation (Banfield 1973 and recent work by Cumming 2003, Sharvit 2008, among others).
5. Integration

This chapter has summarized two lines of analysis for role shift in sign languages. One line compares it to constructed action, and subsumes all types of reports (speech, thoughts, actions) under this label. The other line attempts to create formal structures for role shifting phenomena, focusing in some cases on the syntactic structures involved and in other cases on the semantics needed to account for shifting indexicals.
What is to be made of these various approaches to role shift in sign languages? Is this a case of irreconcilable differences in theoretical foundations? Perhaps the questions one side asks are simply not sensible to the other. However, there are important aspects to both approaches, and a direction is suggested here for gaining from both views, which may result eventually in a more comprehensive analysis than either of the approaches alone.
To begin with, the comparison between role shift and constructed action is quite apt. As happens not infrequently when comparing aspects of sign and spoken language, the sign phenomena can lead to a broadening of our consideration of what languages do, not because sign languages are so different from spoken languages, but because there is more going on in spoken languages than previously considered. Let us take into consideration what speakers do with gestures, facial expressions, and changes in voice quality alongside their words. As Liddell (1998) points out, what speakers do and what signers do is actually rather similar. Constructed dialogue portrays much more than a verbatim replication of another’s spoken words. Just as in role play, thoughts can be ‘quoted’, and the narrator’s point of view can alternate with that of a represented character (shifted attribution of expressive elements). Furthermore, co-speech gestures may participate in constructed action more generally, giving more information about how a character performed an action, or other aspects of the character’s viewpoint.
If role shift is constructed action, and constructed action is an expanded conception of direct discourse, what kinds of formal structures are involved? De Vries (2008) shows that direct quotation in spoken languages can take a number of syntactic forms. Importantly, he shows that quotational clauses have the structure of main clauses, not embedded clauses. This is in line with the proposal of Lee et al. that role shift involves a syntactically independent clause, not an embedded clause.
How can the shifting of indexicals be integrated into this proposal? First, consider the quotative use of role shift. For many researchers, direct quotation sets up a domain which is opaque to semantic analysis. For example, de Vries (2008) follows Clark and Gerrig (1990) in considering quotation to be, pragmatically, a demonstration. He argues that, syntactically, direct quotation can take a variety of forms, but the quoted form is inserted as atomic. His proposal takes the following form (de Vries 2008, 68):

I conclude that quotation can be viewed as a function ⫺ call it quote α ⫺ that turns anything that can pragmatically serve as a (quasi-)linguistic demonstration into a syntactic nominal category:

(62)
quote α: …(α) → [N “α”]
The quotation marks in the output are a provisional notational convention indicating that α is pragmatically a demonstration, and also that α is syntactically opaque. If α itself is syntactically complex, it can be viewed as the result of a previous derivation.
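To take a concrete case (an informal application of de Vries’s function to example (1b) from section 1, not an example from de Vries himself): quote applied to the clause I’ll go to a conference there next week returns the syntactically opaque nominal [N “I’ll go to a conference there next week”], which then serves as the complement of said; the indexicals I, there, and next week are interpreted in the ‘previous derivation’, relative to the quoted speaker Sam’s context.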
On such an analysis, the quoted material is inserted into a sentence but its semantic content is not analyzed as part of the larger sentence. Rather, the content would presumably be calculated in the ‘previous derivation’ where the syntactically complex quoted material is compiled. In this case, interpretation of shifters would take
place according to the context of the quotation (when quoting Joan, ‘I’ refers to the quoted speaker, Joan). So, if quotation is simply a demonstration, there might be no issue with the shifting of indexicals. Thus, quotative role shift might not pose any particular challenges to the formal theorist.
What about its non-quotative uses? Now we must confront the issue of which indexicals shift in non-quotative role shift. Recall Engberg-Pedersen’s claims that first person pronouns shift only in direct discourse. As was pointed out in the previous section, the fact that first person agreement is used on verbs in non-quotative role shift indicates that some account of shifting is still needed. But why should the shifting of first-person pronouns be excluded from non-quotative role shift? The answer might be that it’s not that the pronoun used to pick out the character whose point of view is being portrayed fails to shift, but rather that no pronouns ⫺ or noun phrases ⫺ are used to name this character within non-quotative role shift. This type of constructed action focuses on the action, without naming the participants within the scope of the shift. This is true for all the examples of non-quotative role shift presented thus far. Consider also Zucchi’s (2004, 6) example of non-quotative role shift, given below in (18) (Zucchi uses the notation ‘/Gianni’ to indicate role shift to Gianni).

                         /Gianni
(18) gianni arrive book I-donate-you                                 [LIS]
     ‘When Gianni will come, he’ll give you a book as a present.’
In this example, the agent (gianni) and the theme (book) are named, but before the role shift occurs. The role shift co-occurs with the verb and its agreement markers.
This mystery is not solved, but made somewhat less mysterious, by considering again the comparison between sign language and spoken language. In a spoken English narration of the story of Goldilocks and the Three Bears, a speaker might gesture along with the verb in examples such as (19). In these examples, the verb and gesture constitute a type of constructed action.

(19) a. And she ate it all up.
                 g(eating)
     b. And she was, like, eating it all up.
                           g(eating)
However, if the speaker adds a first-person pronoun, as in (20), the interpretation changes to quotation. As usual with be + like, the report need not be an actual verbatim quote of what the character said (in the story), but may be a report of her thoughts. But the interpretation changes sharply in comparison to the example with no pronoun.

(20) And she was, like, I’m eating it all up.
                            g(eating)
So it seems to be a more general property of non-quotational constructed action that rules out the use of any pronoun (or noun phrase) referring to the character whose point of view is being portrayed, not a restriction against first-person shifting pronouns. What about other indexical elements, such as temporal or locative adverbs? No examples of non-quotational role shifting with shifted indexicals other than first-person agreement have been reported. This is clearly a matter for additional research.
With this in mind, a system is needed to accommodate the shifting nature of first-person verb agreement (and possibly other indexicals) under non-quotational role shift. The proposal by Quer (2005, 2011) has the necessary components: an operator over contexts which can (if needed) separately account for the shifting of different indexicals. This type of approach can then account for the full range of phenomena under consideration here.
6. Conclusion

In recent years, there have been two approaches to role shift in sign languages. One approach makes the comparison between role shift and constructed action (including constructed dialogue). This approach highlights similarities between constructed action in sign languages and the use of voice quality and gestures for similar purposes in spoken languages. The second approach brings formalisms from syntax and semantics to understanding the nature of the shifted indexicals in role shift. This approach also makes comparisons between sign languages and spoken languages, finding some possible similarities between the shifting of indexicals in role shift and in logophoricity and other spoken language phenomena. More research is needed, particularly in determining the extent to which different indexicals may or may not shift together in both quotative and non-quotative contexts across different sign languages.
Do these comparisons imply that there is no difference between signers and speakers in their use of constructed action and shifting indexicals? There is at least one way in which they seem to be different. Quinto-Pozos (2007) asks to what degree constructed action is obligatory for signers. He finds that at least some signers find it very difficult to describe certain scenes without the use of different markers of constructed action (body motions which replicate or indicate the motions of depicted characters). He suggests that there may be differences in the relative obligatoriness of constructed action in sign vs. speech. Exploring this possibility and accounting for it will be additional areas of future research.

Acknowledgements: The research reported here was supported in part by Award Number R01DC00183 from the National Institute on Deafness and Other Communication Disorders. The content is solely the responsibility of the author and does not necessarily represent the official views of the National Institute on Deafness and Other Communication Disorders or the National Institutes of Health.
Notation specific to this chapter

rs            role shift
/Gianni       role shift
|character|   in the notation of works by Liddell and colleagues, words in vertical line brackets label ‘grounded blend elements’
7. Literature

Aarons, Debra/Morgan, Ruth 2003 Classifier Predicates and the Creation of Multiple Perspectives in South African Sign Language. In: Sign Language Studies 3(2), 125⫺156.
Ahlgren, Inger 1990 Deictic Pronouns in Swedish and Swedish Sign Language. In: Fischer, Susan D./Siple, Patricia (eds.), Theoretical Issues in Sign Language Research, Volume 1: Linguistics. Chicago: The University of Chicago Press, 167⫺174.
Anand, Pranav/Nevins, Andrew 2004 Shifty Operators in Changing Contexts. In: Young, Robert (ed.), Proceedings of SALT 14. Ithaca, NY: CLC Publications, 20⫺37.
Banfield, Ann 1973 Narrative Style and the Grammar of Direct and Indirect Speech. In: Foundations of Language 10, 1⫺39.
Clark, Herbert/Gerrig, Richard 1990 Quotations as Demonstrations. In: Language 66, 764⫺805.
Clements, George N. 1975 The Logophoric Pronoun in Ewe: Its Role in Discourse. In: Journal of West African Languages 2, 141⫺171.
Cumming, Samuel 2003 Two Accounts of Indexicals in Mixed Quotation. In: Belgian Journal of Linguistics 17, 77⫺88.
Davidson, Donald 1984 Quotation. In: Davidson, Donald (ed.), Inquiries into Truth and Interpretation. Oxford: Clarendon Press, 79⫺92.
Dudis, Paul G. 2004 Body Partitioning and Real-Space Blends. In: Cognitive Linguistics 15(2), 223⫺238.
Emmorey, Karen/Reilly, Judy (eds.) 1995 Language, Gesture, and Space. Hillsdale, NJ: Lawrence Erlbaum Associates.
Engberg-Pedersen, Elisabeth 1993 Space in Danish Sign Language. Hamburg: Signum.
Engberg-Pedersen, Elisabeth 1995 Point of View Expressed through Shifters. In: Emmorey, Karen/Reilly, Judy (eds.), Language, Gesture, and Space. Hillsdale, NJ: Lawrence Erlbaum Associates, 133⫺154.
Fauconnier, Gilles 1985 Mental Spaces: Aspects of Meaning in Natural Language. Cambridge: Cambridge University Press.
Fauconnier, Gilles/Turner, Mark 1996 Blending as a Central Process of Grammar. In: Goldberg, Adele (ed.), Conceptual Structure, Discourse and Language. Stanford, CA: CSLI Publications, 113⫺130.
Ferrara, Kathleen/Bell, Barbara 1995 Sociolinguistic Variation and Discourse Function of Constructed Dialogue Introducers: The Case of be + like. In: American Speech 70(3), 265⫺290.
Fischer, Susan D./Siple, Patricia (eds.) 1990 Theoretical Issues in Sign Language Research, Volume 1: Linguistics. Chicago: The University of Chicago Press.
Friedman, Lynn 1975 Space, Time, and Person Reference in American Sign Language. In: Language 51, 940⫺961.
Graumann, Carl F. 2002 Explicit and Implicit Perspectivity. In: Graumann, Carl F./Kallmeyer, Werner (eds.), Perspective and Perspectivation in Discourse. Amsterdam: Benjamins, 25⫺39.
Günthner, Susanne 1999 Polyphony and the ‘Layering of Voices’ in Reported Dialogues: An Analysis of the Use of Prosodic Devices in Everyday Reported Speech. In: Journal of Pragmatics 31, 685⫺708.
Herrmann, Annika/Steinbach, Markus 2012 Quotation in Sign Languages ⫺ A Visible Context Shift. In: Alphen, Ingrid van/Buchstaller, Isabelle (eds.), Quotatives: Cross-linguistic and Cross-disciplinary Perspectives. Amsterdam: Benjamins, 203⫺228.
Hyman, Larry/Comrie, Bernard 1981 Logophoric Reference in Gokana. In: Journal of African Languages and Linguistics 3, 19⫺37.
Janzen, Terry 2004 Space Rotation, Perspective Shift, and Verb Morphology in ASL. In: Cognitive Linguistics 15(2), 149⫺174.
Kaplan, David 1989 Demonstratives. In: Almog, Joseph/Perry, John/Wettstein, Howard (eds.), Themes from Kaplan. Oxford: Oxford University Press, 481⫺563.
Kegl, Judy 1986 Clitics in American Sign Language. In: Borer, Hagit (ed.), Syntax and Semantics, Volume 19: The Syntax of Pronominal Clitics. New York: Academic Press, 285⫺365.
Koopman, Hilda/Sportiche, Dominique 1989 Pronouns, Logical Variables, and Logophoricity in Abe. In: Linguistic Inquiry 20, 555⫺588.
Lee, Robert G./Neidle, Carol/MacLaughlin, Dawn/Bahan, Benjamin/Kegl, Judy 1997 Role Shift in ASL: A Syntactic Look at Direct Speech. In: Neidle, Carol/MacLaughlin, Dawn/Lee, Robert G. (eds.), Syntactic Structure and Discourse Function: An Examination of Two Constructions in American Sign Language. Manuscript, American Sign Language Linguistic Research Project. Boston, MA: Boston University, 24⫺45.
Liddell, Scott K. 1995 Real, Surrogate, and Token Space: Grammatical Consequences in ASL. In: Emmorey, Karen/Reilly, Judy (eds.), Language, Gesture, and Space. Hillsdale, NJ: Lawrence Erlbaum Associates, 19⫺41.
Liddell, Scott K. 1998 Grounded Blends, Gestures, and Conceptual Shifts. In: Cognitive Linguistics 9, 283⫺314.
Liddell, Scott K./Metzger, Melanie 1998 Gesture in Sign Language Discourse. In: Journal of Pragmatics 30, 657⫺697.
Lillo-Martin, Diane 1995 The Point of View Predicate in American Sign Language. In: Emmorey, Karen/Reilly, Judy (eds.), Language, Gesture, and Space. Hillsdale, NJ: Lawrence Erlbaum Associates, 155⫺170.
Mandel, Mark 1977 Iconic Devices in American Sign Language. In: Friedman, Lynn A. (ed.), On the Other Hand: New Perspectives on American Sign Language. New York: Academic Press, 57⫺107.
Meier, Richard P. 1990 Person Deixis in American Sign Language. In: Fischer, Susan D./Siple, Patricia (eds.), Theoretical Issues in Sign Language Research, Volume 1: Linguistics. Chicago: The University of Chicago Press, 175⫺190.
Metzger, Melanie 1995 Constructed Dialogue and Constructed Action in American Sign Language. In: Lucas, Ceil (ed.), Sociolinguistics in Deaf Communities. Washington, DC: Gallaudet University Press, 255⫺271.
Miller, Jim/Weinert, Regina 1995 The Function of LIKE in Dialogue. In: Journal of Pragmatics 23, 365⫺393.
Morgan, Gary 1999 Event Packaging in British Sign Language Discourse. In: Winston, Elizabeth (ed.), Story Telling & Conversation: Discourse in Deaf Communities. Washington, DC: Gallaudet University Press, 27⫺58.
Nilsson, Anna-Lena 2004 Form and Discourse Function of the Pointing toward the Chest in Swedish Sign Language. In: Sign Language & Linguistics 7(1), 3⫺30.
Padden, Carol 1986 Verbs and Role-Shifting in American Sign Language. In: Padden, Carol (ed.), Proceedings of the Fourth National Symposium on Sign Language Research and Teaching. Silver Spring, MD: National Association of the Deaf, 44⫺57.
Pfau, Roland/Quer, Josep 2010 Nonmanuals: Their Prosodic and Grammatical Roles. In: Brentari, Diane (ed.), Sign Languages. (Cambridge Language Surveys.) Cambridge: Cambridge University Press, 381⫺402.
Poulin, Christine/Miller, Christopher 1995 On Narrative Discourse and Point of View in Quebec Sign Language. In: Emmorey, Karen/Reilly, Judy (eds.), Language, Gesture, and Space. Hillsdale, NJ: Lawrence Erlbaum Associates, 117⫺131.
Pyers, Jennie/Senghas, Ann 2007 Reported Action in Nicaraguan and American Sign Languages: Emerging Versus Established Systems. In: Perniss, Pamela/Pfau, Roland/Steinbach, Markus (eds.), Visible Variation: Comparative Studies on Sign Language Structure. Berlin: Mouton de Gruyter, 279⫺302.
Quer, Josep 2005 Context Shift and Indexical Variables in Sign Languages. In: Georgala, Effi/Howell, Jonathan (eds.), Proceedings from Semantics and Linguistic Theory 15. Ithaca, NY: CLC Publications, 152⫺168.
Quer, Josep 2011 Reporting and Quoting in Signed Discourse. In: Brendel, Elke/Meibauer, Jörg/Steinbach, Markus (eds.), Understanding Quotation. Berlin: Mouton de Gruyter, 277⫺302.
Quer, Josep/Frigola, Santiago 2006 The Workings of Indexicals in Role Shift Structures in Catalan Sign Language (LSC). Actes del 7è Congrés de Lingüística General, Universitat de Barcelona. CD-ROM.
Quine, Willard V. O. 1969 Word and Object. Cambridge, MA: MIT Press.
Quinto-Pozos, David 2007 Can Constructed Action be Considered Obligatory? In: Lingua 117(7), 1285⫺1314.
Romaine, Suzanne/Lange, Deborah 1991 The Use of Like as a Marker of Reported Speech and Thought: A Case of Grammaticalization in Progress. In: American Speech 66, 227⫺279.
Schlenker, Philippe 2003 A Plea for Monsters. In: Linguistics & Philosophy 26, 29⫺120.
Sharvit, Yael 2008 The Puzzle of Free Indirect Discourse. In: Linguistics & Philosophy 31, 351⫺395.
Shepard-Kegl, Judy 1985 Locative Relations in ASL Word Formation, Syntax and Discourse. PhD Dissertation, MIT.
Streeck, Jürgen 2002 Grammars, Words, and Embodied Meanings: On the Uses and Evolution of So and Like. In: Journal of Communication 52(3), 581⫺596.
Sutton-Spence, Rachel/Woll, Bencie 1998 The Linguistics of British Sign Language. Cambridge: Cambridge University Press.
Tannen, Deborah 1989 Talking Voices: Repetition, Dialogue, and Imagery in Conversational Discourse. Cambridge: Cambridge University Press.
Underhill, Robert 1988 Like is, Like, Focus. In: American Speech 63, 234⫺246.
Vries, Mark de 2008 The Representation of Language within Language: A Syntactico-Pragmatic Typology of Direct Speech. In: Studia Linguistica 62, 39⫺77.
Winston, Elizabeth A. 1991 Spatial Referencing and Cohesion in an American Sign Language Text. In: Sign Language Studies 73, 397⫺410.
Zucchi, Alessandro 2004 Monsters in The Visual Mode? Manuscript, Università degli Studi di Milano.
Diane Lillo-Martin, Storrs, Connecticut (USA)
IV. Semantics and pragmatics

18. Iconicity and metaphor

1. Introduction
2. Iconicity in linguistic theory
3. Examination of linguistic iconicity
4. Relevance of iconicity to sign language use
5. Conclusion
6. Literature
Abstract

Iconicity, or form-meaning resemblance, is a common motivating principle for linguistic items in sign and spoken languages. The combination of iconicity with metaphor and metonymy allows for iconic representation of abstract concepts. Sign languages have more iconic items than spoken languages because the resources of sign languages lend themselves to presenting visual, spatial, and motor images, whereas the resources of spoken languages only lend themselves to presenting auditory images. While some iconicity is lost as languages change over time, other types of iconic forms remain. Despite its pervasiveness in sign languages, iconicity seems to play no role in acquisition, recall, or recognition of lexical signs in daily use. It is important, however, for the use of key linguistic systems for description of spatial relationships (i.e., classifier constructions and possibly pronoun systems). Moreover, language users are able to exploit perceived iconicity spontaneously in language play and poetic usage.
1. Introduction

It has long been noticed that in some cases, there is a resemblance between a concept and the word or sign a community uses to describe it; this resemblance is known as iconicity. For example, Australian Sign Language (Auslan), Sign Language of the Netherlands (NGT), South African Sign Language (SASL), South Korean Sign Language (SKSL), and other sign languages use a form similar to that shown in Figure 18.1 to represent the concept ‘book’ (Rosenstock 2004). The two flat hands with the palms facing upwards and touching each other bear a resemblance to a prototypical book. Iconicity motivates but does not determine the form of iconic signs. For example, Chinese Sign Language (CSL), Danish Sign Language (DSL), and American Sign Language (ASL) all have iconic signs for the concept ‘tree’, but each one is different (Klima/Bellugi 1979). Though iconic linguistic items and grammatical structures are common in both spoken and sign languages, their role in linguistic theory and in the language user’s mind/brain has long been debated. In section 2 below, we will briefly cover the history of
Fig. 18.1: book in several sign languages
linguistic treatments of iconicity. Section 3 gives an overview of lexical, morphological, and syntactic iconicity in sign languages, with a few spoken language examples for comparison; and section 4 treats the relevance of iconicity to daily language use and historical change. As we shall see, iconicity is pervasive in human languages. While it appears to play little or no role in daily use of lexical signs and words, it is crucial to the use of certain spatially based linguistic structures, and may be freely exploited for spontaneous language play.
2. Iconicity in linguistic theory

The simple definition of iconicity is ‘signs that look like what they mean’. In this section, we shall see that this definition is not adequate, and modify it to include cultural and conceptual factors. We will also trace the history of linguists’ attitudes toward iconicity, noting that an increasing sophistication in linguistic definitions of iconicity has paralleled an increasing acceptance of iconicity in linguistic theory and sign language research. (Note that the role of iconicity in phonological theory is not addressed in this chapter; for discussion see van der Kooij (2002) and chapter 3, Phonology.)
2.1. ‘Transparency’ is not an adequate measure of iconicity

Given the simple definition of iconicity as ‘form-meaning resemblance’, we might expect that we could use ‘guessability’ (also called transparency) as a measure of a sign’s iconicity ⫺ after all, if an iconic sign looks like what it means, a naïve observer ought to be able to figure out the meaning. However, several researchers found that non-signers had difficulty guessing the meaning of ASL iconic signs from their forms (Hoemann 1975; Klima/Bellugi 1979), even though many were clearly iconic in that, once the meaning was known, a connection could be seen between form and meaning. This result indicated that fluent signers have to know the meaning of the sign beforehand, and do not simply deduce the meaning from its form.
Pizzuto and Volterra (2000) studied the interaction between culture, conventionalization, and iconicity by testing the ability of different types of naïve subjects to guess the meanings of signs from Italian Sign Language (LIS). They found strong culture-based variation: some signs’ meanings were easily guessed by non-Italian non-signers; some were more transparent to non-Italian Deaf signers; and others were easier for Italian non-signers to guess. That is, some transparency seemed to be universal, some seemed linked to the experience of Deafness and signing, and some seemed to have a basis in Italian culture. In interpreting these results, we can see the need for a definition of iconicity that takes culture and conceptualization into account. Iconicity is not an objective relationship between image and referents. Rather, it is a relationship between our mental models of image and referents (Taub 2001). These models are partially motivated by experiences common to all humans, and partially by experiences particular to specific cultures and societies.
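Transparency in this sense can be operationalized as a score on a guessing task, computed per sign and per subject group. The sketch below is purely illustrative ⫺ the signs, groups, and counts are invented and do not reproduce Pizzuto and Volterra’s data:

```python
# Transparency ("guessability") as the proportion of naive observers who
# correctly guess a sign's meaning, broken down by subject group.
# All signs, groups, and counts are invented for illustration only.

responses = {
    # (sign, subject group): (correct guesses, total subjects)
    ("LIS eat",  "non-Italian non-signers"): (17, 20),
    ("LIS eat",  "Italian non-signers"):     (18, 20),
    ("LIS good", "non-Italian non-signers"): (3, 20),
    ("LIS good", "Italian non-signers"):     (15, 20),  # culture-linked gesture
}

def transparency(sign: str, group: str) -> float:
    """Proportion of subjects in a group who guessed the sign's meaning."""
    correct, total = responses[(sign, group)]
    return correct / total

for sign, group in responses:
    print(f"{sign:10s} | {group:26s} | {transparency(sign, group):.2f}")
```

A pattern like the hypothetical one above, where a sign is guessable only by subjects who share the relevant cultural background, is exactly what motivates the culturally grounded definition of iconicity developed in the next section.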
2.2. Cultural/conceptual definition of iconicity

First, consider the notion of ‘resemblance’ between a linguistic item’s form and its meaning. Resemblance is a human-defined, interactional property based on our ability to create conceptual mappings (Gentner/Markman 1997). We feel that two things resemble each other when we can establish a set of correspondences (or mapping) between our image of one and our image of the other. To be more precise, then, in linguistic iconicity there is a mapping between the phonetic form (sound sequence, handshape or movement, temporal pattern) and some mental image associated with the referent. As noted above, these associations are conceptual in nature and often vary by culture. To illustrate this point, consider Figure 18.2, which presents schematic images of human legs and the forefinger and middle finger extended from a fist. We feel that the two images resemble each other because we set up a mapping between the parts of each image. Once we have established this mapping, we can ‘blend’ the two images (Fauconnier 1997; cf. Liddell 2003) to create a composite structure: an iconic symbol whose form resembles an aspect of its meaning. A number of sign languages have
Fig. 18.2: Structure-preserving correspondences between a) human legs and b) extended index and middle fingers.
Fig. 18.3: The ASL sign drill
used this particular V-handshape to mean ‘two-legged entity’. This form/meaning package is thus an iconic item in those sign languages. Iconic items, though motivated by resemblance to a referent image, are not universal. In our example, the human body has been distilled down to a schematic image of a figure with two downward-pointing appendages. Other sign languages, though they seem to work from the same prototypical image of a human body, have chosen to represent different details: sometimes the head and torso, sometimes the legs, and sometimes both receive special attention in iconic representation. The index finger extended upward from a fist, the thumb extended upward from a fist, and the thumb extended upward with the little finger extended downward, are all phonetic forms used in sign languages to represent the human body. This chapter will distinguish between plain iconicity and extensions of iconicity via metaphor or other conceptual associations. In iconic items, some aspect of the item’s phonetic form (shape, sound, temporal structure, etc.) resembles a physical referent. That is, a linguistic item which involves only iconicity can only represent a concrete item that we can perceive. If a form has an abstract meaning, yet appears to give an iconic depiction of some concrete image, that case involves iconicity linked with metaphor or metonymy. Thus, the ASL sign drill (Figure 18.3), whose form resembles a drill penetrating a wall, is purely iconic: its form directly resembles its meaning.
Fig. 18.4: The ASL sign think-penetrate
On the other hand, there is more than just iconicity in signs such as ASL think-penetrate (Figure 18.4), whose form resembles an object emerging from the head (1-handshape) and piercing through a barrier (flat B-handshape). think-penetrate, which can be translated as ‘to finally get the point’, has a non-concrete meaning. The image of an object penetrating a barrier is used to evoke the meaning of effortful but ultimately successful communication. This use of a concrete image to describe an abstract concept is an instance of conceptual metaphor (Lakoff/Johnson 1980), and think-penetrate is thus metaphorical as well as iconic (see section 3.5 for more detail).
2.3. History of attitudes toward iconicity

There has been a long history of minimizing and dismissing iconicity in language, starting with de Saussure’s (1983 [1916]) doctrine of the ‘arbitrariness of the sign’, which states that there is no natural connection between a concept and the word used to represent it. De Saussure’s statement was aimed at countering a naïve view of iconicity, one that would attempt to derive the bulk of all languages’ vocabularies from iconic origins (i.e., even words like English ‘cat’, ‘dog’, and ‘girl’). But for years, it was used to dismiss discussions of any iconic aspects of language. The rise of functionalist and cognitivist schools of linguistics, with their interest in conceptual motivation, allowed a renewal of attention to iconicity in spoken languages. Studies of ‘sound symbolism’ (e.g., Hinton/Nichols/Ohala 1994), that is, cases in which the sound of a word resembles the sound of its referent, showed that onomatopoetic words are motivated but systematic and language-specific: many spoken languages have a subsystem within which words may resemble their meanings yet conform to the language’s phonological constraints (Rhodes/Lawler 1981; Rhodes 1994). On a syntactic or morphological level (e.g., Haiman 1985), the order of words in a sentence or the order of morphemes in a polysynthetic word was often found to be iconic for temporal order of events or degree of perceived ‘conceptual closeness’ (a metaphorical use of iconicity). Sign linguists, unlike spoken language linguists, never had the option of ignoring iconicity; iconicity is too pervasive in sign languages, and even a non-signing observer can immediately notice the resemblance between some signs and their meanings. The earliest attitude toward sign language iconicity (and one that many non-linguists still hold) was that sign languages were simply a kind of pantomime, a picture language, with only iconicity and no true linguistic structure (Lane 1992). Over the years, sign linguists have had to work hard to fight the entrenched myth of sign languages as pantomime. The first modern wave of sign language linguistics took two basic approaches to iconicity: strongly arguing against its presence or importance, with the goal of proving sign languages to be true languages (e.g., Hoemann 1975; Frishberg 1979; Supalla 1978, 1986, 1990); and diving into descriptions of its various manifestations, intrigued by the differences between sign and spoken languages (e.g., Mandel 1977; DeMatteo 1977). Gradually, research (e.g., Boyes-Braem 1981; Fischer 1974; McDonald 1982; Supalla 1978; Wilbur 1979) began to establish that a linguistic system constrained sign language iconicity, even the most iconic and seemingly variable signs that came to be known as classifiers (see chapter 8). For example, in ASL, one kind of circular handshape (the
F-handshape) is consistently used to trace the outlines of thin cylinders; other shapes are not grammatical. Without understanding the system, one cannot know the grammatically correct way of describing a scene with classifiers; one can only recognize that correct ways are iconic (a subset of the myriad possible iconic ways). These researchers argued against focusing on signs’ iconicity; although many signs and linguistic subsystems are clearly motivated by iconicity, linguists would do better to spend their energy on figuring out the rules for grammatically-acceptable forms. Klima and Bellugi (1979) set forth a measured compromise between the iconicity enthusiasts and detractors. They affirmed the presence of iconicity in ASL on many levels, but noted that it is highly constrained in a number of ways. The iconicity is conventionally established by the language, and not usually invented on the spot; and iconic signs use only the permitted forms of the sign language. Moreover, iconicity appears not to influence on-line processing of signing; it is ‘translucent’, not ‘transparent’, in that one cannot reliably guess the meaning of an iconic sign unless one knows the sign language already. To use their phrase, iconicity in sign languages is submerged ⫺ but always available to be brought to the surface and manipulated. Though Klima and Bellugi’s view has held up remarkably well over the years, recent research has identified a few areas in which signers seem to draw on iconicity in everyday language. We will discuss this research in section 4 below.
3. Examination of linguistic iconicity

We will now look in more detail at the types of iconic structures found in languages. Our focus will be sign language iconicity; spoken language iconicity will be touched on for comparison (also see Perniss/Thompson/Vigliocco (2010) for a recent discussion of the role of iconicity in sign and spoken languages).
3.1. Comparing iconic gestures and iconic signs

People use iconic representations in many communicative situations, from pictorial symbols to spontaneous gestures to fully conventionalized linguistic signs and words. In this section, we will compare iconic spontaneous gestures to iconic conventional linguistic items. Scientific research on gestures has been expanding greatly in recent years (cf. Kendon 1988; McNeill 1992; see also chapter 27). It is well established, for example, that gestures accompanying speech differ in specific ways from gestures that occur alone and carry the entire communicative message. Some gestures (called ‘emblems’ by Kendon 1988) are fully conventionalized, such as the ‘thumbs-up’ gesture indicating approval; others are created spontaneously during a communicative event. Figure 18.5 shows an example of a spontaneous iconic gesture. The woman is telling a story about a character who peeled a banana; as she says those words, her left hand configures as if she were holding the banana, and she moves her right hand downward along the left three times as if she were peeling the banana herself.
Fig. 18.5: Iconic gesture accompanying ‘peels the banana’
This iconic gesture is embedded in a particular discourse event; it could not be interpreted if removed from its context. The woman’s gesture represents a specific action done by a specific referent ⫺ the character’s peeling of a banana. By comparison, Figure 18.6 shows an iconic sign, the ASL sign banana. The dominant closed-X-handshape moves down the upright non-dominant 1-handshape twice, with a shift of orientation between the movements.
Fig. 18.6: The ASL sign banana
Though the sign is strikingly similar to the gesture, it is fully conventional and comprehensible in the absence of context. It represents a concept (banana, a type of fruit), not a specific action or image (a particular person peeling a banana). The gesture and the sign are similar in that they both iconically present an image of a banana being peeled. They are both based on a mapping between two conceptual structures: an imagined action and a mental model of the communicator’s body and surrounding space. These two structures are superimposed to create a composite or ‘blended’ structure (cf. Liddell 2003): the iconic sign or gesture. The differences between the gesture and the sign can be described in terms of differences between the two input structures and the resulting composite. They can also be described in terms of the producer’s intention ⫺ using the terms of Cuxac and Sallandre (2007), the gesturer’s intent is illustrative (i.e., to show an image), and the signer’s intent is non-illustrative (i.e., to refer to a concept). For spontaneous iconic gestures, the first structure is a specific event that the gesturer is imagining, and the second structure is a mental model of the space around the gesturer, including hands, face, and body. People who look at the gesture knowing that it is a composite of these two structures can directly interpret the gesturer’s actions as the actions taking place in the imagined event (Liddell 2003). Recent research (McNeill 1992; Morford et al. 1995; Aronoff et al. 2003) suggests that as iconic gestures are repeated, they may shift to become more like conventional linguistic items in the following ways: the gesturer’s action becomes a regular phonetic form; the imagined event becomes a schematic image no longer grounded in a specific imagined time or place; and the meaning of the composite becomes memorized and automatic, no longer created on the spot via analogy between form and image. Though the ‘peel banana’ gesture in Figure 18.5 is not the direct ancestor of the ASL sign banana, we can surmise that it resembles that ancestor and can serve to illustrate these changes. As the gesturer’s action becomes a sign language phonetic form, it conventionalizes and can no longer be freely modified. The action often reduces in size or length during this process, and may shift in other ways to fit the sign language’s phonological and morphemic system. Aronoff et al. (2003) refer to this as taking on ‘prosodic wordhood’. In our example, we see that the ‘peel banana’ gesture involves three gestural strokes, whereas the ASL sign has two strokes or syllables ⫺ a typical prosodic structure for ASL nouns. As gestures become signs, a shift toward representing objects by reference to their shapes rather than how they are manipulated has also been observed (cf. Senghas (1995) for the creolization of Nicaraguan Sign Language; also see chapter 36, Language Emergence and Creolization). In our gestural example, the non-dominant hand is a fist handshape, demonstrating how the banana is held; in ASL banana, the non-dominant handshape is an extended index finger, reflecting the shape of the banana. Our example also illustrates the shift from an imagined scene to a stylized image, in tandem with the shift from illustrative to non-illustrative intent. In ASL banana, though an image of peeling a banana is presented, it is not intended to illustrate a specific person’s action. Moreover, the sign does not denote ‘peeling a banana’; rather, it denotes the concept ‘banana’ itself. As we shall see, the images presented by iconic signs can have a wide range of types of associations with the concepts denoted by the signs. This discussion applies to iconicity in the oral-aural modality as well as the visual-gestural modality. Vocal imitations are iconic in that the vocal sounds resemble the sounds they represent; spontaneous vocal imitations may conventionalize into iconic spoken-language words that ‘sound like’ what they mean (Rhodes 1994). This type of iconicity is usually called onomatopoeia. Other forms of spoken-language iconicity exist; see Hinton, Nichols and Ohala (1994) for more information. To summarize: iconic spontaneous gestures and iconic signs are similar in that both involve structure-preserving mappings between form and referent.
The crucial differences are that iconic gestures are not bound by linguistic constraints on form, tend to represent a specific action at a specific time and place, and are interpreted as meaningful via an on-line conceptual blending process. In contrast, iconic signs obey the phonotactic constraints of the respective sign language, denote a concept rather than a specific event, and have a directly accessible, memorized meaning.
3.2. Classifiers: illustrative intent with some fixed components

The previous section discussed how a spontaneous iconic gesture changes as it becomes a conventionally established or ‘fixed’ iconic sign. We may add to this discussion the fact that many iconic linguistic structures in sign languages are not fully fixed. In particular, the many types of spatially descriptive structures mostly known as classifiers (see chapter 8) are highly variable and involve strong iconicity ⫺ spatial characteristics of the structure (e.g., motion, location, handshape) are used to represent spatial characteristics of the event being described. Just as in spontaneous iconic gesture, the intent of the signer in these cases is illustrative (i.e., to ‘show’ a particular event or image; see Cuxac/Sallandre (2007) and Liddell (2003) for different analyses). However, classifiers differ from spontaneous gesture in that while certain components of these structures may vary to suit the needs of illustration, other components are fixed (Emmorey/Herzig 2003; Schembri/Jones/Burnham 2005; see also sections 2.3 above and 4.1 below). These fixed components (usually the handshapes) are often iconic as well, but may not be freely varied by the signer to represent aspects of the scene. Thus, classifier constructions are like spontaneous iconic gestures in that they are intended to ‘show’ a specific mental image; some of their components, however, are conventionally established and not variable.
3.3. Types of form/image associations

In cataloguing types of iconicity, we will look at the two main associations in iconic signs: the perceived similarity between the phonetic form and the mental image, and the association between the mental image and the denoted concept (see also Pietrandrea (2002) for a slightly different analysis). We will first examine types of associations between form and image. Note that both illustrative and non-illustrative structures draw on these associations (see also Liddell (2003) and Cuxac/Sallandre (2007) for slightly different taxonomies of these associations). There are many typical ways in which a signer’s hands and body can be seen as similar to a visual or motor image, giving rise to iconic representations. Hands and fingers have overall shapes and can be seen as independent moving objects. They can also trace out paths in space that can be understood as the contour of an object. Human bodies can be seen as representing other human bodies or even animal bodies in shape, movement, and function: we can easily recognize body movements that go with particular activities. Sign languages tend to use most of these types of resemblances in constructing iconic linguistic items. This section will demonstrate a few of these form/image resemblances, using examples from lexical signs, classifiers, and grammatical processes. The first type of form/image association I will call a full-size mapping. In this case, the communicator’s hands, face, and upper body are fully blended with an image of
Fig. 18.7: The Auslan sign write
another human (or sometimes an animal). In spontaneous full-size mappings, the communicator can be thought of as ‘playing a character’ in an imagined scene. He or she can act out the character’s actions, speak or sign the character’s communications, show the character’s emotions, and indicate what the character is looking at. When full-size mappings give rise to lexical items, they tend to denote concepts that can be associated with particular actions. Often, signs denoting activities will be of this type; for example, the sign for write in ASL, SKSL, NGT, Auslan, and many other sign languages (Figure 18.7) is based on an image of a person holding a pen and moving it across paper, and ASL karate is based on stylized karate movements. In addition, categories of animals or people that engage in characteristic actions can be of this type; e.g., ASL monkey is based on an image of a monkey scratching its sides. Full-size mappings also play a part in the widespread sign-language phenomenon known as ‘role shift’ or ‘referential shift’. In role shift, the communicator takes on the roles of several different characters. Sign languages develop discourse tools to show where the signer takes up and drops each role, including gaze direction, body posture, and facial expressions (see also chapter 17, Utterance Reports and Constructed Action). Another major mode of iconic representation might be called hand-size mappings. In these, the hands or fingers represent independent entities, generally at reduced size. The hands and fingers can represent a character or animate being; part of a being ⫺ head, legs, feet, ears, etc.; or an inanimate object. Because hands move freely, but are small, allowing a ‘far-off’ perspective, hand-size mappings are ideal for indicating both the entity’s shape and its overall path through space. In section 2.2, we have already touched on a few examples of classifier handshapes representing humans; for additional examples of classifiers involving hand-size mappings, see chapter 8. Lexicalized hand-size mappings can take on a wide range of meanings associated with entities and their actions (see next section). A slight variation of this mode might be called contour mappings, in which the hands represent the outline or surface contour of some entity. It is common to have classifier forms of this sort; for example, ASL has a set of handshapes for representing cylinders of varying depth (one, two, or four fingers extended) and diameter (F, or closed circle, for narrow cylinders; C, or open circle, for wide ones; for the widest cylinders, both hands are used with C-handshapes). These forms easily lexicalize into
Fig. 18.8: a) house in SKSL, with ‘contour’ mapping vs. b) house in NGT, with ‘tracing’ mapping
signs representing associated concepts; ASL plate and picture-frame are of this type, and so is SKSL house (Figure 18.8a), which is based on an image of a typical house with a pointed roof. In a third major mode of iconic representation, here called tracing mappings, the signer’s hands trace the outline of some entity. Unlike the first two modes, in which the signer’s movement represents an entity’s movement in the imagined event or image, here movement is interpreted as the signer’s ‘sketching’ motion. This mode draws on the basic human perceptual skill of tracking a moving object and imagining its path as a whole. Most sign languages have sets of classifier handshapes used for tracing the outlines of objects ⫺ in ASL, examples include the extended index finger for tracing lines, the flat B-handshape for tracing surfaces, and the curved F- and C-handshapes for tracing cylinders. Lexicalized examples include ASL diploma, based on the image of a cylindrical roll of paper, and NGT house (Figure 18.8b). Many more types of iconic form/image relationships are possible, including: number of fingers for number of entities; manner of movement for manner of action; duration of gesture for duration of event; and repetition of gesture for repetition of event. A detailed description is given in Taub (2001, 5).
3.4. Types of concept/image associations

We turn now to types of relationships between an iconic linguistic item’s image and the associated concept. Note that this section applies only to conventional or ‘frozen’ structures, where the signer is ‘saying without showing’ (i.e., non-illustrative intent in
Cuxac/Sallandre’s terms) ⫺ if the intent were illustrative, the signer would be ‘showing’ an image rather than referencing a concept related to that image. It is a common misimpression that only concrete, simple concepts can be represented by iconic linguistic items. On the contrary, iconic items represent a wide range of concepts ⫺ the only constraint is that there must be some relationship between the iconic image and the concept signified. Since we are embodied, highly visual creatures, most concepts have some relation to a visual, gestural, or motor image. Thus we see a wide variety of concept/image associations in sign languages, with their ability to give iconic representation to these types of images. One common pattern in sign languages is for parts to stand for wholes. If the concept is a category of things that all have roughly the same shape, sometimes the selected image is a memorable part of that shape. In many sign languages, this is a common way to name types of animals. For example, the sign cat in ASL and British Sign Language (BSL) consists of the F-shaped hand (index finger and thumb touching, other fingers extended) brushing against the signer’s cheek; the thumb and index finger touch the cheek, and the palm is directed forward. The image presented here is of the cat’s whiskers, a well-known feature of a cat’s face. If the concept is a category of physical objects that come in many sizes and shapes, sometimes the selected image is a prototypical member of the category. This is the case for the SKSL and NGT signs for house (Figure 18.8), and the various signs for tree cited in section 1: houses and trees come in many sizes and shapes, but the image in both signs is of a prototypical member of the category. For house, the prototype has a pointed roof and straight walls; for tree, the prototype grows straight out of the ground, with a large system of branches above a relatively extended trunk. Categories consisting of both physical and non-physical events can also be represented by an image of a prototypical case, if the prototype is physical. For example, the ASL verb give uses the prototypical image of handing an object to a person, even though give does not necessarily entail physically handling an object; give can involve change of possession and abstract entities as well as movement and manipulation of physical objects (Wilcox 1998). In many cases, the image chosen for a concept will be of a typical body movement or action associated with the concept. Signs denoting various sports are often of this type, as noted in section 3.3 above. Body movements can also name an object that is associated with the movement; for example, car in ASL and BSL uses an image of a person turning a steering wheel (again encoded with fist-shaped instrument classifiers). In some signs, an entire scenario involving the referent as well as other entities is given representation. ASL examples include gasoline, showing gas pouring into a car’s tank, and key, showing a key turning in a lock. Auslan write (Figure 18.7) is also of this type, showing the signer moving a pen across paper. Finally, if some physical object is strongly associated with the concept, then the image of that object may be used to represent the concept. For example, in many sign languages, the sign for olympics represents the linked-circles Olympics logo, as illustrated by signs from three different sign languages in Figure 18.9.
The final type of concept/image association in sign languages is complex enough to merit its own subsection (see section 3.5 below): metaphorical iconic signs, or those which name an abstract concept using a structured set of correspondences between the abstract concept and some physical concept.
Fig. 18.9: The sign for olympics in a) NGT, b) Auslan, and c) SKSL
Though iconic images in spoken languages are limited to sound images, temporal images and quoted speech, the types of concepts given iconic representation are not so limited. This is because any concept that is somehow associated with these kinds of sensory images can enter into the analogue-building process. Thus, a concept such as ‘the destructive impact of one thing into another’ can be named by the iconic English word crash, an example of onomatopoeia. This concept is not primarily an auditory one, but such impacts nearly always have a characteristic sound image associated with them. It is that sound image that receives iconic representation as crash. Then the iconic word is used to talk about the concept as a whole. Even abstract concepts that can in some way be associated with a sound image can thus be represented iconically in spoken languages (cf. Oswalt 1994) ⫺ for example, a stock market crash can be metaphorically associated with the sort of rapid descent and impact that could make a sound of this sort. It turns out, of course, that the vast majority of concepts are not closely enough associated with a sound image. For this and other reasons, iconicity is less common in spoken than in sign languages. Fewer concepts are appropriate for iconic representation in the spoken modality; and, as we saw in the previous section, there are far fewer parameters that the spoken modality can exploit. The smaller amount of iconicity in spoken languages, which has been attributed to the inferiority of iconic representations, could just as well have been attributed to the inferiority of the spoken modality in establishing iconic representations.
3.5. Iconicity linked with metaphor

Conceptual metaphor is the use of one domain of experience to describe or reason about another domain of experience (Lakoff/Johnson 1980; Lakoff 1992). In spoken languages, this often manifests as the systematic use of words from the first domain (source) to describe entities in the second domain (target). For example, a phrase such as ‘We need to dig deeper’ can mean ‘We need to think more intensely’ about some topic. In sign languages, however, the situation is somewhat different, due to the linkage between metaphor and iconicity (Wilbur 1987; Wilcox 2000; Taub 2001). Here we see
metaphor at work within sign languages’ lexicons: vocabulary for abstract (target) domains often consists of iconic representations of concrete (source-domain) entities. Thus, for example, in the ASL verb analyze, movements of the bent-V-handshapes iconically show the process of digging deeper into some medium. In addition to the lexicon, the iconic classifier systems used for describing movements, locations, and shapes can be applied to the metaphorical description of abstract (non-physical) situations (see examples in Wilcox 2000); thus, this type of iconicity can be both illustrative and non-illustrative. This linkage between metaphor and iconicity is possible but rare in spoken languages; the pervasive iconicity of sign languages makes this phenomenon much more common there. Conversely, metaphor without iconicity is rare in ASL (cf. Wilbur 1990) and other sign languages (for the metaphorical use of ‘time-lines’ in sign languages, see chapter 9, Tense, Aspect, and Modality). As an example, let us consider the domain of communication (also see Wilcox 2000). Many languages have a metaphor ‘communication is sending’ (e.g., Reddy 1979; Lakoff/Johnson 1980) where successful communication is described as successfully sending an object to another person. In ASL, a large set of lexical signs draw on this metaphor, including signs glossed as inform, communicate, miss, communication-breakdown, it-went-by-me, over-my-head, and others. Brennan (1990) has documented a large set of signs in BSL that draw on the same metaphor as well. We shall see that these signs involve two conceptual mappings: one between target and source conceptual domains, and one between source-domain image and phonetic form (Taub 2001). In the ASL sign think-penetrate (Figure 18.4 above), the dominant 1-handshape begins at the temple and travels toward the locus of the verb’s object. On the way, it encounters the non-dominant hand in a flat B-handshape, palm inward, but the index finger penetrates between the fingers of the flat hand. If this sequence were to be
Tab. 18.1: Iconic mapping for think-penetrate

ARTICULATORS                                SOURCE
1->CL (1-handshape)                         an object
Forehead                                    head
1->CL touches forehead                      object located in head
1->CL moves toward locus of addressee       sending an object to someone
non-dominant B-CL (flat hand)               barrier to object
1->CL inserted between fingers of B-CL      penetration of barrier
signer’s locus                              sender
addressee’s locus                           receiver

Tab. 18.2: Iconic mapping for drill

ARTICULATORS                                SOURCE
dominant L-handshape                        long thin object with handle (in particular, a drill)
non-dominant B-CL (flat hand)               flat surface
L inserted between fingers of B-CL          penetration of surface
Tab. 18.3: Double mapping for think-penetrate (Iconic Mapping: ARTICULATORS -> SOURCE; Metaphorical Mapping: SOURCE -> TARGET)

ARTICULATORS                                SOURCE                            TARGET
1->CL                                       an object                         an idea
Forehead                                    head                              mind; locus of thought
1->CL touches forehead                      object located in head            idea understood by originator
1->CL moves toward locus of addressee       sending an object to someone      communicating idea to someone
non-dominant B-CL                           barrier to object                 difficulty in communication
1->CL inserted between fingers of B-CL      penetration of barrier            success in communication despite difficulty
signer’s locus                              sender                            originator of idea
addressee’s locus                           receiver                          person intended to learn idea
interpreted as a classifier description, it would denote a long thin object (the index finger or ‘1->’) emerging from the head, moving toward a person, encountering a barrier, and penetrating it. Table 18.1 spells out this iconic mapping between articulators and concrete domain. It is useful to contrast think-penetrate and ASL drill (Figure 18.3 above), a sign derived from lexicalized classifiers. In drill, the dominant hand assumes an L-handshape, with index finger and thumb extended; the non-dominant hand again forms a flat B-handshape. The index finger of the L-hand penetrates between the fingers of the B-hand. The image chosen to stand for the piece of equipment known in English as a ‘drill’ is that of a long thin object (with a handle) penetrating a surface; the L-handshape, of course, iconically represents the long thin object (or drill), and the flat hand represents the surface pierced by the drill. This is a case of pure iconicity. The iconic mapping is given in Table 18.2. Unlike drill, think-penetrate does not describe a physical scene. Its actual meaning can be translated as ‘to get one’s point across’ or ‘for someone to understand one’s point’. When we consider as well signs such as i-inform-you, think-bounce, over-my-head, and it-went-by-me, all of which resemble classifier descriptions of objects moving to or from heads and pertain to communication of ideas, we have strong evidence for a metaphorical mapping between the domains of sending objects and communicating ideas. Thus, think-penetrate involves two mappings: an iconic mapping between articulators and source domain, and a metaphorical mapping between source and target domains. In Table 18.3, we can see how each articulatory element of think-penetrate corresponds to an element of the domain of communication, via the double mapping. The signer’s location corresponds to the communicator’s location; the index finger corresponds to the information to be communicated; the movement of the index finger from signer toward the syntactic object’s location in space corresponds to the communication of that information to an intended recipient; the flat hand represents a difficulty in communication; and finally, penetration of the flat hand represents success in communication despite the difficulty. Signs that share a metaphorical source/target mapping need not share an iconic source/articulators mapping. The classifier system of ASL provides several iconic ways
to describe the same physical situation, and all of these ways can be applied to the description of a concrete source domain. For example, consider the sign i-inform-you, where closed flat-O-handshapes begin at the signer’s forehead and move toward the addressee’s location, simultaneously opening and spreading the fingers. This sign does not have a physical articulator corresponding to the idea/object; instead, the flat-O classifier handshapes iconically represent the handling of a flat object and the object itself is inferred. Nevertheless, in both i-inform-you and think-penetrate, the moved object (regardless of its representation) corresponds to the notion of an idea. This suggests that the double-mapping model is a useful way to describe metaphorical/iconic phenomena in sign languages: a single-mapping model, which describes signs in terms of a direct mapping between articulators and an abstract conceptual domain, would miss what think-penetrate and i-inform-you have in common (i.e., the source/target mapping); it would also miss what think-penetrate and drill have in common (i.e., the fact that the source/articulators mappings are much like the mappings used by the sign language’s productive classifier forms). We may note that metaphorical/iconic words and constructions also exist in spoken languages, and can be handled with a double mapping and the analogue-building process in the same way as metaphorical/iconic signs. Some examples of metaphorical iconicity in English include lengthening to represent emphasis (e.g., ‘a baaaad idea’; cf. Okrent 2001, 187 f.), and temporal ordering to represent order of importance (e.g., topic/comment structures such as ‘Pizza, I like’; cf. Haiman 1985).
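The notion of a structure-preserving mapping, and the composition of two such mappings into a double mapping, can be made concrete with a small formalization. The sketch below is our own illustration of (an abridged version of) Tables 18.1 and 18.3, not an implementation from the literature:

```python
# A structure-preserving mapping modeled as a finite dictionary. Composing
# the iconic mapping (articulators -> source) with the metaphorical mapping
# (source -> target) yields the double mapping of Tab. 18.3.

iconic = {  # articulators -> source domain (cf. Tab. 18.1, abridged)
    "1-handshape": "an object",
    "forehead": "head",
    "movement toward addressee's locus": "sending an object to someone",
    "non-dominant B-handshape": "barrier to object",
    "insertion between fingers of B-handshape": "penetration of barrier",
}

metaphorical = {  # source -> target domain ('communication is sending')
    "an object": "an idea",
    "head": "mind; locus of thought",
    "sending an object to someone": "communicating idea to someone",
    "barrier to object": "difficulty in communication",
    "penetration of barrier": "success in communication despite difficulty",
}

# Composition: articulators -> target, as in the double mapping of Tab. 18.3.
double_mapping = {art: metaphorical[src] for art, src in iconic.items()}

for articulator, target in double_mapping.items():
    print(f"{articulator:42s} -> {target}")
```

Because drill uses an iconic table of the same kind but has no metaphorical layer, while i-inform-you shares the metaphorical table but maps it onto different articulators, keeping the two tables separate captures exactly what each pair of signs has in common.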
3.6. Partially iconic structures: temporal iconicity

Drawing on the definition of iconicity as a structure-preserving mapping between form and image associated with meaning, we find many lexical items, syntactic structures, and other linguistic structures that are partially iconic. In these cases, only some aspects of each sign are iconically motivated; thus, unlike the iconic items discussed above, they do not present a single consistent iconic image. We only have space to look at one type of partial iconicity: the case of temporal iconicity, where morphological and syntactic structures whose temporal structure is related to their meaning are superimposed on non-iconic lexical material. Other partially iconic phenomena include: lexical items for which different aspects of the sign are motivated by different iconic/metaphorical principles (Taub 2001, 7); sign language pronoun systems, which are partially iconic and partially deictic (see chapter 11, Pronouns); and metaphorical/iconic use of locations in signing space to convey notions of relative power and affiliation (see chapter 19, Use of Sign Space). For the most part, these phenomena are not consistent with illustrative intent. Temporal iconicity is fairly common in both sign and spoken language temporal aspect systems. One common example is the use of reduplication (i.e., the repetition of phonetic material) in morphological structures denoting repetition over time (see, e.g., Wilbur 2005). Many sign languages have a much more extensive use of iconicity in their temporal aspect systems, in that the temporal structure of most aspectual inflections reflects the temporal structure of the event types they describe. Consider, for example, the ASL protracted-inceptive (PI) inflection (Brentari 1996). This inflection can occur on any telic verb; it denotes a delay between the onset of the
Fig. 18.10: Structure-preserving correspondences between the temporal structure of a) a situation where a person is delayed but eventually leaves and b) the sign leave inflected for PI.
verb’s action and the accomplishment of that action ⫺ in effect, a ‘protracted beginning’ of the action. PI’s phonetic form involves an extended hold at the verb’s initial position, while either the fingers wiggle (if the handshape is an open

[Gap in the source: the text breaks off here and resumes partway through chapter 22, Communicative interaction, in a discussion of turn-taking that follows a BSL group conversation given as example (7).]

(7) [The two-column gloss transcript and free translation of this BSL conversation between Tanya, Nancy, Trish, and Frances about their odd teachers could not be fully recovered. The turns cited in the discussion below include (7e) Frances: “You went to art school, didn’t you Trish?”, overlapped by Trish: “Yes, I did art but I left.”; (7g) Tanya: “I had an art teacher =”; and (7h) Trish: “= with a horrible Mohican cut.”]
An example of such an overlap is where Frances has only just begun to ask the question about Trish going to art school (7e), when Trish starts to answer it. In (7g/h) Tanya mentions her art teacher and immediately Trish adds further information about him, namely his horrible Mohican hairstyle. It is suggested that overlaps in signing do not create interference in discourse as has been suggested for overlapping speech. Rather, such overlaps have been argued to have their basis in the establishment of solidarity and connection (see the discussion of Hoza (2007) in section 7). However, spoken languages also vary and the same argument could be made for languages such as Spanish and Hebrew where considerable overlap is allowed. We do not yet know how much variation there is between sign languages in terms of overlap allowed. Children have to learn the turn-taking patterns of their language community. In a study of children learning NGT, Baker and van den Bogaerde (2005) found that children acquire the turn-taking patterns over a considerable number of years. In the first two years, there is quite some overlap between the turns of the deaf mother and her child. This seems to be related to the child starting to sign when the mother is still signing. Around the age of three, both children studied showed a decrease in the amount of overlap, which is interpreted as an indication of the child learning the basics of turn-taking. However, in the deaf child with more advanced signing skills, at age six the beginnings of collaborative floor are evident with the child using overlaps to contribute to the topic and to provide feedback. Interpreters have been found to play an important role in turn-taking where signing is involved. A major point of influence lies in the fact that they often need to identify the source of the utterance to be interpreted for the other participants (Metzger/Fleetwood/Collins 2004). In ASL conversations, the interpreters identified the source by pointing, body shift, using the name sign, or referring to the physical appearance of the source, either individually or in combination: the more complex the situation, the more likely an explicit combination of strategies. Thus, body shift was most common in interpreting dyadic conversations, whereas a combination of body shift, pointing, and use of name sign occurred more often in multi-party conversations. Source attribution does not always occur, but is quite common in multi-party conversations and
reaches the 100 % level in deaf-blind conversations where the mode of communication is visual-tactile (see chapter 23, Manual Communication Systems: Evolution and Variation). A signer wishing to contribute to a multi-party conversation has to indicate his/her desire to take the turn. This usually requires mutual eye gaze between the person already signing and the potential contributor. If the interaction is being interpreted, this process is more complex since the person wishing to sign also has to take into account the hearing participants and therefore has to keep an eye on the interpreter for possible contributions from them. In the case of meetings, the chairperson plays a crucial role here (Van Herreweghe 2002), as will be discussed further in section 8.
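Quantifying ‘amount of overlap’, as in the NGT acquisition study mentioned above, presupposes time-aligned turn annotations. The sketch below is purely illustrative ⫺ the turn times are invented, and this is not the coding scheme of Baker and van den Bogaerde (2005):

```python
# Illustrative only: total overlapping signing time between two
# participants, computed from time-aligned turn annotations.
# Turn times (start, end) are in seconds and are invented.

mother = [(0.0, 2.5), (4.0, 6.0), (8.0, 9.5)]
child = [(2.0, 3.0), (5.5, 7.0), (9.0, 10.0)]

def overlap_duration(turns_a, turns_b):
    """Sum the lengths of all pairwise interval intersections."""
    total = 0.0
    for a_start, a_end in turns_a:
        for b_start, b_end in turns_b:
            total += max(0.0, min(a_end, b_end) - max(a_start, b_start))
    return total

print(f"Total overlap: {overlap_duration(mother, child):.1f} s")
```

Tracking such a measure across recording sessions is one way to make a developmental claim like ‘overlap decreases around age three’ empirically explicit.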
5. Coherence and cohesion

According to the Co-operation Principle of Grice (section 2), it is important to be efficient in the presentation of information (maxim of quantity) and to indicate how utterances are related to one another (maxim of relevance). Creating coherence and cohesion within a discourse is thus essential. Participants in a conversation also need to create coherence and cohesion between various contributions. In sign languages, this is achieved by using a number of devices, some of which appear to be modality-specific. Reference is an important means of creating cohesion. As in spoken languages, signers can repeat lexical items that they have either produced themselves earlier or that have been produced by another signer, thereby creating lexical cohesion. In the BSL conversation in (7), the participants repeat both their own lexical items and those of others. For example, in (7d) the sign odd is produced first by Trish, and is then repeated by Frances and Tanya. In ASL, it has been observed that referents can be introduced by use of a carefully fingerspelled word and later referred to by use of a rapidly fingerspelled version (Metzger/Bahan 2001). Referents that are going to occur frequently in the discourse are often assigned a fixed location in the hemispheric space in front of the signer (see chapter 19, Use of Sign Space). This location can subsequently be pointed to with the hand and/or the eyes in order to establish anaphoric reference, thus creating cohesion (a schematic illustration is given at the end of this section). Clearly, this strategy is efficient as it follows Grice’s maxim of quantity of information. Agreeing signs whose movement targets these locations, as well as person and object classifiers and list buoys, are also means of creating anaphoric reference (see chapter 7 on verb agreement and chapter 8 on classifiers). Spatial mapping, as this use of space is called, plays a major role in creating coherent discourse structures (Winston 1991). Children take some time to learn to use these devices (Morgan 2000) and occasionally overuse full nouns instead of using anaphoric reference, as also observed in children acquiring a spoken language. There is a second type of space which is also commonly used to create cohesion and coherence. The signer can use his own body as a shifted referential location in order to describe the interaction of characters and the course of events (Morgan 2002, 132). The perspective of one of the participants can thus be portrayed. This is also known as role-shift, perspective shift (Lillo-Martin 1995), or constructed action (Metzger 1995) (see chapter 17 for discussion). In the Jordanian Sign Language example in Figure 22.3 (Hendriks 2008, 142), the signer first takes the perspective of Sylvester, the
Fig. 22.3: Role shift in re-telling of a Tweety cartoon in Jordanian Sign Language (Hendriks 2008, 142)
cat, to illustrate how he looks at Tweety, the bird, through binoculars (Figure 22.3a); then she switches to the perspective of Tweety, who does the same (Figure 22.3b). In the same story, the signer again takes on the role of Tweety and uses her own facial expressions and movements (e.g. looking around anxiously) to tell the story. By using these two types of spaces, either separately or overlapping, cohesion within the discourse is established. Longer stretches of discourse can be organized and at the same time linked by use of discourse markers. Roy (1989) found two ASL discourse markers, now and now-that, that were used in a lecture situation to divide the lecture into three parts, viz. the introduction, the body of the lecture, and the conclusion. The sign on-to-the-next-part marking a transition was also found. For Danish Sign Language (DSL), a manual gesture has been described that has several different discourse functions (Engberg-Pedersen 2002). It appears to be used, for example, for temporal sequencing, evidentiality, and (dis)confirmation. Engberg-Pedersen calls this gesture the ‘presentation gesture’ since it imitates the hand movement of someone holding up something for the other to look at. The hand is flat and the palm oriented upwards. In the example in (8), adapted from Engberg-Pedersen (2002, 151), the presentation gesture is used to link the two sentences (8a) and (8b).
(8) a.             _____________________ y/n
        index1 ask want look-after index3a [presentation gesture] /       [DSL]
    b.  indexforward nursery-school strike [presentation gesture] /
        ‘I asked, “Would you look after her, since the nursery school is on strike?”’
Similar discourse markers exist in other sign languages, such as Irish Sign Language and New Zealand Sign Language (McKee/Wallingford 2011), but some sign languages, such as, for example, German Sign Language, seem not to have them (Herrmann 2007; for a discussion of such markers in the context of grammaticalization, see Pfau/Steinbach (2006)). A further aspect related to coherence concerns cases in which a correction is necessary because a mistake has been made or an utterance was unclear ⫺ the so-called conversational repairs (Schegloff/Jefferson/Sacks 1977). There are many possible types of repair, such as self-initiated repair, self-completed repair, other-initiated repair, other-completed repair, and
word search. Repairs initiated by others can result from an explicit remark, such as What do you mean?, or from non-verbal behavior indicating lack of comprehension. There is hardly any research on repairs in signed interaction. Dively (1998) is one of the few studies on this aspect; it is based on material from ethnographic interviews with three deaf ASL signers. A specific characteristic of signed repairs identified by Dively was the use of simultaneity. Firstly, non-manual behaviors such as averting eye gaze and turning the head away from the addressee were used to indicate a search for a lexical item on the part of the signer. Furthermore, it was possible to sign with one hand, for example, a first person pronoun on the right hand, and indicate the need for repair with the other hand, in this case by means of the sign wait-a-minute (Dively 1998, 157).
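The assignment of referents to loci described at the beginning of this section can be pictured as a small discourse model: referents are associated with locations in signing space, and later pointing signs are resolved against that record. The following sketch is our own schematic illustration, not a model proposed in the sign language literature; the locus labels are invented:

```python
# Schematic discourse model for spatial mapping: referents are assigned
# loci in signing space; a later pointing sign (INDEX) directed at a locus
# is resolved anaphorically to the referent established there.

loci = {}  # locus label -> referent

def establish(referent: str, locus: str) -> None:
    """Sign the referent's noun, then localize it at a locus."""
    loci[locus] = referent

def resolve_index(locus: str) -> str:
    """Resolve a pointing sign directed at a locus."""
    return loci.get(locus, "<unresolved referent>")

establish("woman", "ipsilateral-right")
establish("nursery school", "contralateral-left")

print(resolve_index("ipsilateral-right"))   # -> woman
print(resolve_index("forward"))             # -> <unresolved referent>
```

The efficiency noted above falls out of this picture: once a referent is localized, a single point suffices for re-mention, in line with Grice’s maxim of quantity.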
6. Narratives

Storytelling is important in all cultures, but in those that do not have a writing system for their language, it often plays a central part in cultural life. Sign languages do not have a (convenient) written form (see chapter 43, Transcription); thus in many Deaf cultures, storytelling skills are as highly valued as they are in spoken languages without a written form. A good storyteller in ASL, for instance, can swiftly and elegantly change between different characters and perspectives, which includes close-ups and long shots from every conceivable angle (Mindess 2006, 106). Several narrative genres have been described for sign languages: jokes, tales about the old days, stories of personal experience, legends, and games, all of which are also known in spoken languages. Some genres seem to be specific to sign languages. Mindess gives examples of ABC and number stories (Carmel 1981, in Mindess 2006, 106). In such stories, the ASL handshapes representing the letters A⫺Z or the figures 0⫺9 are used for concepts and ideas. Signs are selected such that sequences of handshapes create certain patterns. These stories are for fun, but also for children to learn the alphabet and numbers. Nowadays many can be found on YouTube (e.g. ASL ABC Story!). A typical joint activity, group narrative, is described by Rutherford (1985, 146):

In a group narrative each person has a role or roles. These can be as characters in the story or as props. The story line can be predetermined, or it can occur spontaneously. The subject matter can range from actually experienced events (e.g. a family about to have a baby) to the borrowed, and embellished, story line of a television program or movie. They can be created and performed by as few as two to as many as ten or twelve; two to six participants being more common. Most important, these narratives are developed through the use of inherent elements of ASL. Though they make use of mime and exaggerated performance, as does adult ASL storytelling, they are, like the narratives of hearing children, sophisticated linguistic expressions.
Other forms of ASL narrative, or folklore, are fingerspelling, mime, one-handshape stories, and skits (Rutherford 1993, in Mindess 2006, 106). Languages vary in the way they package information in a certain type of discourse, but all speakers or signers have to deal with linguistic aspects at the sentence and story level, take into account the information needs of the addressee(s), and sequence
large amounts of information (Morgan 2006, 315; Becker 2009). The structure of a narrative has many modality-independent aspects, such as creating the setting, developing the plot line, and providing emotive responses. The devices that the narrator has at his disposal depend on the language. In shaping the story and keeping it interesting and understandable, the narrator has a range of narrative devices to choose from, such as shifting from the point of view of the narrator to that of one of the participants, creating little detours by introducing subtopics, or making use of dramatic features like changes in intonation or loudness of voice. In sign languages, aspects like facial expression and use of the body play an important role at different levels in the narrative. In signed narratives, modality-specific linguistic devices are used to organize and structure the story (Sutton-Spence/Woll 1999, 270–275), such as spatial mapping (e.g. Winston 1995; Becker 2009), eye gaze behavior (Bahan/Supalla 1995), or the use of one or two hands (Gee/Kegl 1983). Besides these modality-specific devices, more general strategies like the use of discourse markers, the choice of particular lexical items, or pausing (Gee/Kegl 1983) are also observed. Gee and Kegl studied pause structure in relation to story structure in ASL and found that the two structures correlate almost perfectly: the longest pauses indicate the skeleton of the story (introduction, story, conclusion), while the shortest pauses mark units at the sentence level. Bahan and Supalla (1995) looked at eye gaze behavior at the sentence level and found two basic types, namely gaze to the audience and characters' gaze, which each serve a different function. Gaze to the audience indicates that the signer is the narrator. When the signer is constructing the actions or dialogue of one of the protagonists in the story, he will not look at the conversational partner(s) but at the imagined interlocutor (Metzger/Bahan 2001, 141). Pauses in combination with eye gaze direction thus scaffold, as it were, the story.
7. Pragmatic adequacy

Edward Hall (1976) distinguished high and low context cultures. In a high context culture, people are deeply involved with each other, information is widely shared, and there is a high dependence on context. In other words, if you do not share the same cultural experience as everyone else, you might not understand what is going on in any given conversation. In contrast, in a low context culture, people are less involved with each other, more individualistic, and most information is made explicit. Deaf communities have been described as being high context cultures (Mindess 2006, 46 f.), and this is said to be reflected in the communicative style. American Deaf signers, for instance, have been described as being more direct than hearing speakers (Mindess 2006, 82 ff.). Here we will first discuss register with respect to the formality of interactions (section 7.1), then politeness and taboo (7.2), and finally humor (7.3).
7.1. Register

The formality of the situation has an impact on different aspects of signing. Formal signing is characterized by enlarged signs and slower signing (Baker-Shenk/Cokely 1980).
Fig. 22.4: Two girls whispering, hiding their signing with their coats (Jansma/Keppels 1993)
A different handshape may be used in different registers or contexts based on formality; as discussed in Baker and van den Bogaerde (2008) and in Berenz (2002), a flat hand may be used as an index in formal settings instead of the pointing index finger. Informal signing, on the other hand, shows more assimilation between signs, centralization of locations (e.g. locations on the head being realized lower in space), and a reduced size of movement. Two-handed signs are often reduced to articulation with one hand (Schermer et al. 1991). Besides these phonological aspects, lexical choice may also be influenced (Crasborn 2001, 43). Russo (2004) has also related register to the amount of iconicity used: the more formal the situation, the less iconicity. Shouting and whispering can be appropriate behaviors in certain situations. The forms used for shouting and whispering have been described for sign languages (Crasborn 2001, 196, 199–201; Mindess 2006, 26). Shouting is usually characterized by bigger signs and slower movements. However, if shouting occurs in anger, then movements may in fact be accelerated, and the signing is then accompanied by an angry, exaggerated facial expression. In the whispering mode in ASL, signs that are normally articulated on the face and body can be displaced towards a location to the side, below the chest, or to a location that cannot easily be observed (Emmorey/McCullough/Brentari 2003, 41). Deaf children whispering in sign language have been observed to either hide their strong hand behind an object or to block the view of the strong hand using their weak hand. In the school playground, children have been seen to hide the hands of another signing child under their coat, as shown in Figure 22.4 (Jansma/Keppels 1993).
7.2. Politeness and taboo

What is considered to be polite behavior differs across countries, cultures, languages, and situations. Hall (1989) and Mindess (2006) investigated politeness in ASL (see also Roush 2011). They report that it is impolite in ASL to impair communication, for
example, by holding someone's hands to stop them signing or by turning your back on someone while they are signing. This can be mitigated by using signs that Hall (1989, 95) glosses as time-out or one-five, so that the interlocutor knows that the conversation will be briefly interrupted. As for the interaction of hearing with deaf people, Mindess found that talking in a way that deaf people cannot speech-read or answering the phone without explanation are perceived as impolite. In ASL, it is also considered taboo to inquire about the addressee's hearing loss, ability to speak, feelings at missing music, and so on. Mindess also describes how to pass between two people having a signed conversation, thereby potentially interrupting their conversation. The best way, she says, is to just walk right through and not attract attention to yourself, perhaps with a very tiny articulation of the sign excuse-me. Hearing people unfamiliar with the Deaf way, not wanting to be rude, often behave in exactly the opposite way: they apologize extensively and thereby disrupt the conversation to a much larger extent. Hoza (2007) recently published a more extensive study of politeness in ASL, in which he applies the general politeness schema of Brown and Levinson (1987). His study revealed that politeness forms in ASL are different from those used in spoken English. This has its roots, according to Hoza (2007, 208), in a different culture-base: the culture of the American Deaf community is based on involvement, in contrast to the majority culture, which is based on independence, that is, the desire not to impose. This explains why signs such as please and thank-you are used less frequently and, when used, also differently in ASL as compared to spoken English. In this way, he also accounts for the finding that English speakers are more indirect in their speech than ASL signers (see section 3). Like Mindess (2006, 84), Hoza found that Deaf Americans are more direct than hearing Americans who use English. However, they still use indirect forms if politeness requires this (see also Roush 1999; Nonhebel 2002; Arets 2010 on NGT). Hoza includes the concept of face in his analysis: "Face can be understood to be of two kinds: (a) the desire to be unimpeded in one's actions and (b) the desire to be approved of" (Hoza 2007, 21). In his study, the term involvement is used to describe the type of politeness that is associated with showing approval and camaraderie, and the term independence to describe the type of face associated with not wanting to impose (2007, 22). Hoza identified the following five non-manual markers associated with politeness strategies in formulating requests and rejections in ASL (Hoza 2007, 185).

pp – polite pucker (similar in form to the adverbial marker mm, which conveys the sense of normally or as expected): expresses a small imposition and cooperation is assumed; it has involvement function only;
tight lips – appears to be a general default politeness marker for most requests of moderate imposition (p. 141) and has both involvement and independence function;
pg – polite grimace: expresses significant threats to both involvement and independence (p. 149);
pg-frown – polite grimace-frown: is associated with a severe imposition, both in involvement and independence (p. 162);
bt – body/head teeter: indicates extreme threats to both involvement and independence in one of two ways. When it co-occurs with other non-manual markers, it intensifies these markers. When it appears without a non-manual marker, it questions the possibility of compliance with a request or the possibility of an option working out (p. 178).

The introduction of new technologies, like texting via smartphones, video-messages, and Skype connections with visual contact between callers, is also challenging "one of the most basic tenets of Deaf culture: the primacy of face-to-face interactions" (Mindess 2006, 151). Mindess states that in any situation where signers are dependent on eye contact and responsive back-channeling for mutual understanding, it is "terribly distracting to see participants' heads bobbing up and down as they continually glance at their pagers" (Mindess 2006, 152). Clearly, the unspoken rules of Deaf behavior (e.g. eye contact) are challenged here, and new rules need to be developed for pragmatically adequate behavior. Some aspects of language are subject to strong taboos, but these depend very much on the individual culture. In some cultures, certain signs indicating body parts, as well as taboo signs, cannot be made in all social contexts. Body parts are often referred to by pointing to the actual limb or area (of the organ), but Pyers (2006) found that there are cultural taboos that influence which body parts can be indexed. For instance, in North American society it is considered socially inappropriate to point to genitals. ASL respects this taboo and consequently, signs for genitalia have been lexicalized, so that the actual location of the genitals on the body is not involved (Pyers 2006, 287). Nowadays, it is very easy to look up on the internet taboo signs or signs considered too explicit to use in all contexts. For instance, there is a veritable archive of taboo signs in the form of short clips to be found on YouTube. Apparently, however, these signs are not taboo to the extent that people refuse to be associated with them on the internet: the young people who demonstrate these signs are clearly recognizable in the video clips. Interestingly, such taboo signs are used by deaf patients who suffer from Gilles de la Tourette's syndrome, just as taboo words are used by hearing Tourette's patients (Morris et al. 2000; Dalsgaard/Damm/Thomsen 2001). Cultural gestures used together with spoken languages are sometimes adopted into sign languages; however, the register of such cultural gestures in the spoken language is not always perceived by deaf signers. This can lead to communication problems in interaction between hearing and deaf people. As Pietrosemoli (2001) reports, the cultural gesture for 'having sex' is taboo among hearing speakers of Spanish. This gesture is one-handed with a d-handshape and palm orientation to the body. If this gesture is used in Venezuelan Sign Language, it can be quite inappropriate, as (9), adapted from Pietrosemoli (2001, 170), illustrates.
(9) biology index1pl study now plants [having sex] how [Venezuelan SL]
    'In biology we are now studying how plants fuck.'
Where signs have a resemblance to taboo cultural gestures, language change can take place to avoid problems of inappropriate use (Pietrosemoli 1994).
7.3. Humor

Another aspect that is culturally defined is the use of humor. It takes firm knowledge of the culture of a group of people and appropriate pragmatic skills to be able to
decide whether or not a joke or a pun can be made in a particular situation. Deaf humor is often based on the shared experience of deaf people (Sutton-Spence/Woll 1999, 264), as is humor in all cultures. Klima and Bellugi (1979) first described the sign plays and humor used in ASL. Bienvenu (1994) studied how humor may reflect Deaf culture and came up with four categories on which Deaf humor is based: the visual nature of humor; humor based on deafness as an inability to hear; humor from a linguistic perspective; and humor as a response to oppression (Bienvenu 1994, 17). Young deaf children used to learn early in life, in the deaf schools, how to imitate their friends or teachers, not with the aim of insulting them, but as a form of entertainment. Imitation is still a favorite pastime, especially in international settings (Bouchauveau 1994), where storytelling is used to exchange information about, for instance, differences between countries. The inability to hear also provides the basis for well-known jokes in which noise or sounds identify the hearing, and thus also the deaf, who do not react. Linguistic jokes include riddles, puns, and sign games, for example, changing the sign understand to little-understand by using the pinkie finger instead of the index finger (Klima/Bellugi 1979, 324). A signer can express in sign that s/he is oiling the joints in the hands and arms with a small oil can, to indicate that s/he is preparing for a presentation in sign language (Sutton-Spence/Woll 1999, 266). One thing is clear about humor: it is necessary to know the culture, and the context in which humor is used, to be able to appreciate it. Hearing people often miss the point of signed jokes or puns, just as deaf people often do not appreciate the spoken humor of hearing people, not only because the linguistic finesse is lacking, but also because there is a lack of knowledge about each other's culture.
8. Influence of cultural/hearing status

When interacting with each other, hearing and deaf people can use a signed or a spoken language, or a form of sign-supported speech. The choice of a particular language mode certainly depends partly on the hearing status of the participants and partly on their fluency in the language(s) involved. But it is not hearing status and fluency in a language alone that ultimately decide in what form communication will take place. What is decisive is the attitude a person has towards Deafness and sign language, and her/his general outlook and views on life, in combination with personal experience and skills. Young, Ackerman, and Kyle (2000) explored the role of hearing status in the interaction between deaf and hearing employees. Deaf people associated the use of sign language with personal respect, value, and confidence, and hearing colleagues' willingness to sign was considered more significant than their fluency. Hearing employees connected sign language use to change, pressure, and the questioning of professional competence. With respect to improving relations, the deaf participants perceived the challenges involved as person-centered, meaning that they wanted to be involved, make relationships, and feel good in the working environment. In contrast, the hearing participants were found to be more language-centered, that is, they struggled with how well, how confidently, and how consistently they could sign. In other words: whereas for the deaf people, the
willingness of hearing people to sign was paramount, for hearing people themselves, the standard to which they signed was the most important (2000, 193). In a study of procedures during mixed deaf-hearing meetings, Van Herreweghe (2002) was able to demonstrate that the choice of a deaf or a hearing chairperson, and subsequently the choice of sign language or spoken language as the main language of the meeting, had far-reaching consequences for the participation of the deaf in the flow of conversation and thus in the decision-making process. In communicative interaction involving sign language, the cultural stance people take seems to have more impact on the linguistic choices and possibilities than their hearing status. Even so, being hearing or deaf does have some consequences, for example, for the perception of signs. Deaf people have been found to have better peripheral vision than hearing people. They regularly scan their surroundings to compensate for the absence of acoustic cues and typically monitor arm and hand motions with peripheral vision while looking at a conversational partner's eyes (Bavelier et al. 2000). Even hearing children of deaf parents (Codas) who are native signers make different use of their visual and auditory cortex than deaf-born individuals, due to the fact that they can hear (Fine et al. 2005). Their bilingualism (for instance, in English and ASL) is different from that of deaf bilinguals who use the same languages. In what way the more acute peripheral vision of deaf native signers influences signed or spoken interaction, with either deaf or hearing participants, is not yet known.
9. Conclusion

In the previous sections, we have described various aspects of interaction involving a sign language. With respect to many, in fact almost all, of the relevant aspects, no, or relatively little, research has been carried out to date. Most of the available studies focus on ASL, but in many cases, it is not clear whether the results found for one sign language can be transferred to another. In areas such as the Gricean maxims, it seems likely that there are universal principles but, again, almost no research has investigated this topic from a sign language perspective. On the other hand, in other areas, we can anticipate considerable differences between sign languages. In turn-taking, for example, it is known that spoken languages differ greatly in the signals they use and the patterns observed. It can thus be expected that sign languages will show a similar amount of variation. Clearly, there is still considerable work to be done.
10. Literature

Arets, Maureen
2010 An (Im)polite Request. The Expression of Politeness in Requests in Sign Language of the Netherlands. MA Thesis, University of Amsterdam.
ASL ABC Story!
http://www.youtube.com/watch?v=qj1MQhXfVJg (Accessed on 01/11/09).
Bahan, Ben/Supalla, Sam
1995 Line Segmentation and Narrative Structure: A Study of Eye-gaze Behavior in American Sign Language. In: Emmorey, Karen/Reilly, Judy (eds.), Language, Gesture, and Space. Hillsdale, NJ: Lawrence Erlbaum, 171–191.
Baker, Anne/Bogaerde, Beppie van den
2005 Eye Gaze in Turntaking in Sign Language Interaction. Paper Presented at the 10th International Congress for the Study of Child Language, Berlin, July 2005.
Baker, Anne/Bogaerde, Beppie van den
2008 Interactie en Discourse [Interaction and Discourse]. In: Baker, Anne/Bogaerde, Beppie van den/Pfau, Roland/Schermer, Trude (eds.), Gebarentaalwetenschap. Een Inleiding [Sign Linguistics. An Introduction]. Deventer: Van Tricht, 83–98.
Baker, Charlotte
1977 Regulators and Turn-taking in American Sign Language Discourse. In: Friedman, Lynn A. (ed.), On the Other Hand. New York: Academic Press, 218–236.
Baker-Shenk, Charlotte/Cokely, Dennis
1980 American Sign Language: A Teacher's Resource Text on Grammar and Culture. Silver Spring, MD: TJ Publishers.
Bavelier, Daphne/Tomann, Andrea/Hutton, Chloe/Mitchell, Teresa/Corina, David/Liu, Guoying/Neville, Helen
2000 Visual Attention to the Periphery Is Enhanced in Congenitally Deaf Individuals. In: Journal of Neuroscience 20(RC93), 1–6.
Becker, Claudia
2009 Narrative Competences of Deaf Children in German Sign Language. In: Sign Language & Linguistics 12(2), 113–160.
Berenz, Norine
2002 Insights into Person Deixis. In: Sign Language & Linguistics 5(2), 203–227.
Bienvenu, Martina J.
1994 Reflections of Deaf Culture in Deaf Humor. In: Erting, Carol J./Johnson, Robert C./Smith, Dorothy L. S./Snider, Bruce D. (eds.), The Deaf Way, Perspectives from the International Conference on Deaf Culture. Washington, DC: Gallaudet University Press, 16–23.
Bouchauveau, Guy
1994 Deaf Humor and Culture. In: Erting, Carol J./Johnson, Robert C./Smith, Dorothy L. S./Snider, Bruce D. (eds.), The Deaf Way, Perspectives from the International Conference on Deaf Culture. Washington, DC: Gallaudet University Press, 24–30.
Bogaerde, Beppie van den
2000 Input and Interaction in Deaf Families. PhD Dissertation, University of Amsterdam. Utrecht: LOT.
Brown, Penelope/Levinson, Stephen
1987 Politeness: Some Universals in Language Usage. Cambridge: Cambridge University Press.
Campbell, Cindy
2001 The Application of Speech Act Theory to American Sign Language. PhD Dissertation, University at Albany, State University of New York.
Celo, Pietro
1996 Pragmatic Aspects of the Interrogative Form in Italian Sign Language. In: Lucas, Ceil (ed.), Multicultural Aspects of Sociolinguistics in Deaf Communities. Washington, DC: Gallaudet University Press, 132–151.
Coates, Jennifer/Sutton-Spence, Rachel
2001 Turn-taking Patterns in Deaf Conversation. In: Journal of Sociolinguistics 5, 507–529.
Coerts, Jane
1992 Nonmanual Grammatical Markers. An Analysis of Interrogatives, Negations and Topicalisations in Sign Language of the Netherlands. PhD Dissertation, University of Amsterdam.
Crasborn, Onno
2001 Phonetic Implementation of Phonological Categories in Sign Language of the Netherlands. PhD Dissertation, University of Leiden. Utrecht: LOT.
Dalsgaard, Søren/Damm, Dorte/Thomsen, Per
2001 Gilles de la Tourette Syndrome in a Child with Congenital Deafness. In: European Child & Adolescent Psychiatry 10, 256–259.
Dively, Valery L.
1998 Conversational Repairs in ASL. In: Lucas, Ceil (ed.), Pinky Extension and Eye Gaze. Language Use in Deaf Communities. Washington, DC: Gallaudet University Press, 137–169.
Emmorey, Karen/McCullough, Stephen/Brentari, Diane
2003 Categorical Perception in American Sign Language. In: Language and Cognitive Processes 18(1), 21–45.
Engberg-Pedersen, Elisabeth
2002 Gestures in Signing: The Presentation Gesture in Danish Sign Language. In: Schulmeister, Rolf/Reinitzer, Heimo (eds.), Progress in Sign Language Research: In Honor of Siegmund Prillwitz. Hamburg: Signum, 143–162.
Fine, Ione/Finney, Eva M./Boynton, Geoffrey M./Dobkins, Karen M.
2005 Comparing the Effects of Auditory Deprivation and Sign Language Within the Auditory and Visual Cortex. In: Journal of Cognitive Neuroscience 17(10), 1621–1637.
Gee, James P./Kegl, Judy A.
1983 Narrative/Story Structure, Pausing and American Sign Language. In: Discourse Processes 6, 243–258.
Grice, Paul
1975 Logic and Conversation. In: Cole, Peter/Morgan, Jerry L. (eds.), Studies in Syntax and Semantics III: Speech Acts. New York: Academic Press, 183–198.
Hall, Edward
1976 Beyond Culture. Reprint, New York: Anchor/Doubleday, 1981.
Hall, Susan
1989 train-gone-sorry: The Etiquette of Social Conversations in American Sign Language. In: Wilcox, Sherman (ed.), American Deaf Culture. An Anthology. Burtonsville, MD: Linstok Press, 89–102.
Harris, Margaret/Mohay, Heather
1997 Learning to Look in the Right Place: A Comparison of Attentional Behaviour in Deaf Children with Deaf and Hearing Mothers. In: Journal of Deaf Studies and Deaf Education 2, 95–103.
Hendriks, Bernadet
2008 Jordanian Sign Language: Aspects of Grammar from a Cross-linguistic Perspective. PhD Dissertation, University of Amsterdam. Utrecht: LOT.
Herrmann, Annika
2007 The Expression of Modal Meaning in German Sign Language and Irish Sign Language. In: Perniss, Pamela/Pfau, Roland/Steinbach, Markus (eds.), Visible Variation. Comparative Studies on Sign Language Structure. Berlin: Mouton de Gruyter, 245–271.
Hoza, Jack
2007 It's Not What You Sign, It's How You Sign It: Politeness in American Sign Language. Washington, DC: Gallaudet University Press.
Jansma, Sonja/Keppels, Inge
1993 The Effect of Immediately Preceding Input on the Language Production of Deaf Children of Hearing Parents. MA Thesis, University of Amsterdam.
Johnston, Trevor/Schembri, Adam
2007 Australian Sign Language. An Introduction to Sign Language Linguistics. Cambridge: Cambridge University Press.
Klima, Edward S./Bellugi, Ursula
1979 The Signs of Language. Cambridge, MA: Harvard University Press.
Lillo-Martin, Diane
1995 The Point of View Predicate in American Sign Language. In: Emmorey, Karen/Reilly, Judy (eds.), Language, Gesture, and Space. Hillsdale, NJ: Lawrence Erlbaum, 155–170.
Mather, Susan
1987 Eye Gaze and Communication in a Deaf Classroom. In: Sign Language Studies 54, 11–30.
Mather, Susan
1989 Visually Oriented Teaching Strategies with Deaf Preschool Children. In: Lucas, Ceil (ed.), The Sociolinguistics of the Deaf Community. New York: Academic Press, 165–187.
Mather, Susan/Rodriguez-Fraticelli, Yolanda/Andrews, Jean F./Rodriguez, Juanita
2006 Establishing and Maintaining Sight Triangles: Conversations Between Deaf Parents and Hearing Toddlers in Puerto Rico. In: Lucas, Ceil (ed.), Multilingualism and Sign Languages. Washington, DC: Gallaudet University Press, 159–187.
McKee, Rachel L./Wallingford, Sophia
2011 'So, Well, Whatever': Discourse Functions of Palm-up in New Zealand Sign Language. In: Sign Language & Linguistics 14(2), 213–247.
Metzger, Melanie
1995 Constructed Dialogue and Constructed Action in American Sign Language. In: Lucas, Ceil (ed.), Sociolinguistics of Deaf Communities. Washington, DC: Gallaudet University Press, 255–271.
Metzger, Melanie/Bahan, Ben
2001 Discourse Analysis. In: Lucas, Ceil (ed.), The Sociolinguistics of Sign Languages. Cambridge: Cambridge University Press, 112–144.
Metzger, Melanie/Fleetwood, Earl/Collins, Steven D.
2004 Discourse Genre and Linguistic Mode: Interpreter Influences in Visual and Tactile Interpreted Interaction. In: Sign Language Studies 4(2), 118–136.
Mindess, Anna
2006 Reading Between the Signs. Intercultural Communication for Sign Language Interpreters (2nd edition). Yarmouth, ME: Intercultural Press.
Morgan, Gary
2000 Discourse Cohesion in Sign and Speech. In: International Journal of Bilingualism 4(3), 279–300.
Morgan, Gary
2002 Children's Encoding of Simultaneity in British Sign Language Narratives. In: Sign Language & Linguistics 5(2), 131–165.
Morgan, Gary
2006 The Development of Narrative Skills in British Sign Language. In: Schick, Brenda/Marschark, Marc/Spencer, Patricia (eds.), Advances in Sign Language Development in Deaf Children. Oxford: Oxford University Press, 314–343.
Morris, Huw/Thacker, Alice/Newman, Peter/Lees, Andrew
2000 Sign Language Tics in a Pre-lingually Deaf Man. In: Movement Disorders 15(2), 318–320.
Nonhebel, Annika
2002 Indirecte Taalhandelingen in Nederlandse Gebarentaal. Een Kwalitatieve Studie naar de Non-manuele Markering van Indirecte Verzoeken [Indirect Speech Acts in NGT: A Qualitative Study of the Non-manual Marking of Indirect Requests]. MA Thesis, University of Amsterdam.
Pfau, Roland/Steinbach, Markus
2006 Modality-independent and Modality-specific Aspects of Grammaticalization in Sign Languages. In: Linguistics in Potsdam 24, 3–98.
Pietrosemoli, Lourdes
1994 Sign Terminology for Sex and Death in Venezuelan Deaf and Hearing Cultures: A Preliminary Study of Pragmatic Interference. In: Erting, Carol J./Johnson, Robert C./Smith, Dorothy L./Snider, Bruce D. (eds.), The Deaf Way: Perspectives from the International Conference on Deaf Culture. Washington, DC: Gallaudet University Press, 677–683.
Pietrosemoli, Lourdes
2001 Politeness and Venezuelan Sign Language. In: Dively, Valerie/Metzger, Melanie/Taub, Sarah/Baer, Anne Marie (eds.), Signed Languages: Discoveries from International Research. Washington, DC: Gallaudet University Press, 163–179.
Prinz, Philip M./Prinz, Elizabeth A.
1985 If Only You Could Hear What I See: Discourse Development in Sign Language. In: Discourse Processes 8, 1–19.
Pyers, Jenny
2006 Indicating the Body: Expression of Body Part Terminology in American Sign Language. In: Language Sciences 28, 280–303.
Richmond-Welty, E. Daylene/Siple, Patricia
1999 Differentiating the Use of Gaze in Bilingual-bimodal Language Acquisition: A Comparison of Two Sets of Twins with Deaf Parents. In: Journal of Child Language 26, 321–388.
Roush, Daniel
1999 Indirectness Strategies in American Sign Language. MA Thesis, Gallaudet University.
Roy, Cynthia B.
1989 Features of Discourse in an American Sign Language Lecture. In: Lucas, Ceil (ed.), Sociolinguistics of the Deaf Community. San Diego: Academic Press, 231–251.
Russo, Tommaso
2004 Iconicity and Productivity in Sign Language Discourse: An Analysis of Three LIS Discourse Registers. In: Sign Language Studies 4(2), 164–197.
Rutherford, Susan
1985 The Traditional Group Narrative of Deaf Children. In: Sign Language Studies 47, 141–159.
Sacks, Harvey/Schegloff, Emanuel A./Jefferson, Gail
1974 A Simplest Systematics for the Organization of Turn-taking for Conversation. In: Language 50, 696–735.
Schegloff, Emanuel/Jefferson, Gail/Sacks, Harvey
1977 The Preference for Self-correction in the Organization of Repair in Conversation. In: Language 53, 361–382.
Schermer, Trude/Koolhof, Corline/Harder, Rita/de Nobel, Esther (eds.)
1991 De Nederlandse Gebarentaal. Twello: Van Tricht.
Searle, John
1969 Speech Acts: An Essay in the Philosophy of Language. Cambridge: Cambridge University Press.
Smith, Sandra/Sutton-Spence, Rachel
2005 Adult-child Interaction in a BSL Nursery – Getting Their Attention! In: Sign Language & Linguistics 8(1/2), 131–152.
Spencer, Patricia
2000 Looking Without Listening: Is Audition a Prerequisite for Normal Development of Visual Attention During Infancy? In: Journal of Deaf Studies and Deaf Education 5(4), 291–302.
Sutton-Spence, Rachel/Woll, Bencie
1999 The Linguistics of British Sign Language: An Introduction. Cambridge: Cambridge University Press.
Swisher, M. Virginia
1992 The Role of Parents in Developing Visual Turn-taking in Their Young Deaf Children. In: American Annals of the Deaf 137, 92–100.
Van Herreweghe, Mieke
2002 Turn-taking Mechanisms and Active Participation in Meetings with Deaf and Hearing Participants in Flanders. In: Lucas, Ceil (ed.), Turntaking, Fingerspelling, and Contact in Signed Languages. Washington, DC: Gallaudet University Press, 73–106.
Winston, Elizabeth A.
1991 Spatial Referencing and Cohesion in an ASL Text. In: Sign Language Studies 73, 397–410.
Young, Alys/Ackermann, Jennifer/Kyle, Jim
2000 On Creating a Workable Signing Environment – Deaf and Hearing Perspectives. In: Journal of Deaf Studies and Deaf Education 5(2), 186–195.
Anne Baker, Amsterdam (The Netherlands)
Beppie van den Bogaerde, Utrecht (The Netherlands)
V. Communication in the visual modality

23. Manual communication systems: evolution and variation

1. Introduction
2. The origin of sign languages
3. Sign language types and sign language typology
4. Tactile sign languages
5. Secondary sign languages
6. Conclusion
7. Literature
Abstract

This chapter addresses issues in the evolution and typology of manual communication systems. From a language evolution point of view, sign languages are interesting because it has been suggested that oral language may have evolved from gestural (proto)language. As far as typology is concerned, two issues will be addressed. On the one hand, different types of manual communication systems, ranging from simple gestural codes to complex natural sign languages, will be introduced. The use and structure of two types of systems – tactile sign languages and secondary sign languages – will be explored in more detail. On the other hand, an effort will be made to situate natural sign languages within typological classifications originally proposed for spoken languages. This approach will allow us to uncover interesting inter-modal and intra-modal typological differences and similarities.
1. Introduction

Throughout this handbook, when authors speak of 'sign language', they usually refer to fully-fledged natural languages with complex grammatical structures which are the major means of communication of many (but not all) prelingually deaf people. In the present chapter, however, 'sign language' is sometimes understood more broadly and also covers manual communication systems that do not display all of the features usually attributed to natural languages (such as, for example, context-independence and duality of patterning). In addition, labels such as 'gestural code' or 'sign system' will be used in order to make a qualitative distinction between different types of systems. This chapter addresses issues in the emergence and typology of manual communication systems, including but not limited to natural sign languages. The central theme connecting the sections is the question of how such systems evolve, as general means
of communication but also in more specialized contexts, and how the various systems differ from each other with respect to expressivity and complexity. The focus will be on systems that are the primary means of communication in a certain context – no matter how limited they are. Co-speech gesture is thus excluded from the discussion, but is dealt with in detail in chapter 27. In section 2, we will start our investigation with a discussion of hypotheses concerning the origin of (sign) languages, in particular, the gestural theory of language origin. In section 3, we present an overview of different types of manual communication systems – from gestural codes to natural sign languages – and we sketch how sign language research relates to linguistic typology. In particular, we will address selected topics in intra- and inter-modal typological variation. In the next two sections, the focus will be on specific types of sign languages, namely the tactile sign languages used in communication with deafblind people (section 4) and sign languages which, for various reasons, are developed and used within hearing groups or communities, the so-called 'secondary sign languages' (section 5).
2. The origin of sign languages

The origin and evolution of language is currently a hotly debated issue in evolutionary biology as well as in linguistics. Sign languages are interesting in this context because some scholars argue that manual communication may have preceded vocal communication. Since language does not fossilize, all the available evidence for evolutionary scenarios is indirect and comes from diverse sources, including fossil evidence, cultural artifacts (such as Acheulean hand-axes), birdsong, and co-speech gesture. In the following, I will first present a brief sketch of what we (think we) know about language evolution (section 2.1) before turning to the gestural theory of language origin (section 2.2).
2.1. The evolution of language

According to Fitch (2005, 2010), three components have been identified as crucial for the human language faculty: speech (that is, the signal, be it spoken or signed), syntax or grammar (that is, the combinatorial rules of language), and semantics (that is, our ability to convey an unlimited range of meanings). Human speech production involves two key factors, namely our unusual vocal tract and vocal imitation. The descended larynx of humans enables them to produce a greater diversity of formant frequency patterns. While this anatomical change is certainly an important factor, recent studies indicate that "selective forces other than speech might easily have driven laryngeal descent at one stage of our evolution" (Fitch 2005, 199). Since other species with a permanently descended larynx have been discovered (e.g. lions), it is likely that the selective force is the ability to produce impressive vocalizations (the 'size exaggeration hypothesis'; also see Fitch 2002). Still, it is clear that early hominids were incapable of producing the full range of speech sounds (Lieberman 1984; Fitch 2010).
Imitation is a prerequisite for language learning and communication. Interestingly, while non-human primates are highly constrained when it comes to imitation, other species, like birds and dolphins, are very good at imitating vocalizations. Birdsong in particular has attracted the attention of scholars because it shows interesting parallels with speech (Marler 1997; Doupe/Kuhl 1999). First, most songbirds learn their species-specific songs by listening to other members of their species. Second, they pass through a critical period in development; acquisition after the critical period results in defective songs. Third, at least some birdsong displays syntactic structure in that smaller units are combined to form larger units (Okanoya 2002). In contrast to human language, however, birdsong is devoid of compositional meaning. Based on these parallels, it has been suggested (for instance, by Darwin) that the earliest stage of language evolution may have been musical. Fitch (2005, 220) refers to this stage as 'prosodic protolanguage', that is, a language which is characterized by complex, learned vocalization but lacks compositional meaning. Presumably, the evolution of this protolanguage was driven by sexual selection (Okanoya 2002). At a later stage, communicative needs may have motivated the addition of semantics. "By this hypothesis, music is essentially a behavioral 'fossil' of an earlier human communication system" (Fitch 2005, 221; also see Fitch 2006). While the above scenario could be paraphrased as 'syntax without semantics', an alternative scenario suggests that early stages of language were characterized by 'semantics without syntax'; this is referred to as 'asyntactic protolanguage'. According to this hypothesis, protolanguage consisted of utterances of only a single word, or simple concatenations of words, without phrase structure (Jackendoff 1999; Bickerton 2003). Jackendoff (1999, 273) suggests that single-word utterances associated with high affect, such as wow!, ouch!, and dammit!, are "'fossils' of the one-word stage of language evolution – single-word utterances that for some reason are not integrated into the larger combinatorial system". Jackendoff further assumes that the first vocal symbols were holistic gestalts (pretty much like primate calls) and that a phonological system evolved when the repertoire of symbols (the lexicon) increased. Since a larger lexicon requires more phonological distinctions, one may speculate that the evolution of the vocal tract (the descended larynx) was "driven by the adaptivity of a larger vocabulary, through more rapid articulation and enhanced comprehensibility" (Jackendoff 1999, 274). A third evolutionary scenario, which assumes a 'gestural protolanguage', will be addressed in the following section. Before concluding this section, however, I want to point out that the recent isolation of a language-related gene, called Forkhead-box P2 (or FOXP2), has caused considerable excitement among linguists and evolutionary biologists (Vargha-Khadem et al. 1995). It has been found that the human version of FOXP2 is functionally identical in all populations worldwide, but differs significantly from that of chimpanzees. Statistical analysis of the relevant changes suggests that these changes occurred not more than 200,000 years ago in human phylogeny (see Fitch (2005, 2010) for details).
2.2. The gestural theory of language origin

I shall now describe one scenario, the gestural theory of language origin, in more detail because it emphasizes the crucial role of manual communication in the evolution of
language (Hewes 1973, 1978; Armstrong/Wilcox 2003, 2007; Corballis 2003). According to this theory, protolanguage was gestural, that is, composed of manual and facial gestures. The idea that language might have evolved from gestures is not a new one; actually, it has been around since the French Enlightenment of the 18th century, if not longer (Armstrong/Wilcox 2003). The gestural hypothesis is consistent with the existence of co-speech gesture (see chapter 27), which thus could be interpreted as a remnant of gestural protolanguage, and with the fact that sign languages are fully-fledged, natural languages. Further support comes from the observation that apes are considerably better at learning signs than speech (Gardner/Gardner/van Cantfort 1989). As for anatomical developments, it has been established that bipedalism and enlargement of the brain are the defining anatomical traits of the hominid lineage (which separated from the lineage leading to chimpanzees approximately 5–6 million years ago). Once our ancestors became bipedal, the hands were available for tool use and gestural communication. Fossil evidence also indicates that about three million years ago, "the human hand had begun to move toward its modern configuration" while "the brain had not yet begun to enlarge, and the base of the skull, indicative of the conformation of the vocal tract, had not begun to change toward its modern, speech-enabling shape" (Armstrong/Wilcox 2003, 307). In other words: it seems likely that manual communication was possible before vocal communication, and assuming that there was a desire or need for an efficient exchange of information, gestural communication may have evolved. Gradually, following a phase of co-occurrence, vocal gestures must have replaced manual gestures. However, given the existence of sign languages, the obvious question is why this change should have occurred in the first place. Undoubtedly, speech is more useful when interlocutors cannot see each other and while holding tools; also, it "facilitated pedagogy through the simultaneous deployment of demonstration and verbal description" (Corballis 2010, 5). Some scholars, however, doubt that these pressures would have been powerful enough to motivate a change from manual to vocal communication and thus criticize the gestural hypothesis (MacNeilage 2008). In the 1990s, the gestural theory was boosted when mirror neurons (MNs) were discovered in the frontal cortex of non-human primates (Rizzolatti/Arbib 1998). MNs are activated both when a monkey performs a manual action and when it sees another monkey perform the same action. According to Fitch (2005, 220), this discovery is exciting for three reasons. First, MNs have "the computational properties that would be required for a visuo-manual imitation system", and, as mentioned above, imitation skills are crucial in language learning. Second, MNs have been claimed to support the gestural theory because they respond to manual action (Corballis 2003). Third, and most importantly, MNs are located in an area of the primate brain that is analogous to Broca's area in humans, which is known to play a central role in both language production and comprehension. The fact that (part of) Broca's area is not only involved in speech but also in motor functions such as complex hand movements (Corballis 2010) lends further support to an evolutionary link between gestural and vocal communication (also see Arbib 2005).
Clearly, when it comes to the evolution of cognition in general, and the evolution of language in particular, one should “not confuse plausible stories with demonstrated truth” (Lewontin 1998, 129). Given the speculative nature of many of the issues addressed above, it seems impossible to prove that the gestural theory of language origin
is correct. According to Corballis (2010, 5), the gestural theory thus "best serves as a working hypothesis to guide research into the nature of language, and the genetic and evolutionary changes that gave rise to our species" – a statement that might as well be applied to the other evolutionary scenarios.
3. Sign language types and sign language typology

3.1. Sign language types

In this section, I will provide a non-exhaustive typology of manual communication systems (to use a fairly neutral term), proceeding from simple context-bound gestural codes to complex natural sign languages. We will see that more complex systems may evolve from simpler ones – a development which, to some extent, might mirror processes which presumably also played a role in the evolution of (sign) language. First, there are gestural communication systems and technical manual codes used, for instance, over distances that preclude oral communication (such as the gestures used to direct crane drivers described by Kendon (2004, 292 f.)), under water (manual symbols used by scuba divers), or in situations which require silence (for instance, manual communication during hunting; see e.g. Lewis (2009)). Clearly, all of these manual codes are only useful in very specific contexts. Still, the existence of hunting codes in particular is interesting in the present context because it has been argued that at least some sign languages may have developed from manual codes used during hunting (Divale/Zipin 1977; Hewes 1978). Crucially, none of these gestural communication systems is used by deaf people. This is a feature they share with 'secondary sign languages', sign languages which, for various reasons, were developed and used by hearing people. The manual communication systems commonly subsumed under the label 'secondary sign language' (e.g. sign languages used by Australian Aboriginals or monks) show varying degrees of lexical and grammatical complexity, but all of them appear to be considerably more elaborate than the manual codes mentioned above. Aspects of the use and structure of secondary sign languages will be discussed in detail in section 5. So-called 'homesign' systems are also used in highly restricted contexts; these contexts, however, are not situational in nature (e.g. diving, hunting) but familial. Prelingually deaf children growing up in hearing families without sign language input may develop gestural communication systems to interact with their parents and siblings. Within a family, such systems may be quite effective means of communication, but typically, they are used for only one generation and are not transmitted beyond the family. While at first sight, a homesign system may appear to be a fairly simple conglomerate of mostly iconic gestures, research has shown that these gestures are discrete units and that there is evidence of morphological and syntactic structure (e.g. predicate frames, recursion) in at least some homesign systems (Goldin-Meadow 2003; see chapter 26 for extensive discussion). Homesign systems are known to have the potential to develop further into fully-fledged sign languages, once homesigners get in contact with each other, for example, at a boarding school – as has been
documented, for instance, for Nicaraguan Sign Language (Kegl/Senghas/Coppola 1999; see chapter 36, Language Emergence and Creolisation, for discussion). Moving further from less complex systems towards 'true' sign languages, we find various types of manual communication systems that combine the lexicon of a sign language with structural elements of the surrounding spoken language. Such systems – for instance, Manually-Coded English (MCE) in the United States and Nederlands met Gebaren (Sign-supported Dutch) in the Netherlands – are commonly used in educational settings or, more generally, when Deaf signers interact with hearing second language learners of a sign language. Even within this class of systems, however, a considerable amount of structural variation exists (also see Crystal/Craig (1978), who refer to such systems as 'contrived sign languages'). Some systems mirror the structure of a spoken language to the extent that functional morphemes are represented by dedicated signs or fingerspelling (e.g. the copula verb be or bound morphemes like -ing and third person singular -s in English-based systems). Other systems are closer to a particular sign language in that many of the grammatical mechanisms characteristic of the sign language are preserved (e.g. use of space, non-manual marking), but signs are ordered according to the rules of the spoken language (for MCE, see Schick (2003); also see chapter 35, Language Contact and Borrowing). Turning finally to natural sign languages, further classifications have been proposed (Zeshan 2008). To some extent, these classifications reflect developments in the field of sign language linguistics (Perniss/Pfau/Steinbach 2007; also see chapter 38). In the 1960s and 1970s, linguistic research on sign languages started with descriptions of a number of western sign languages, such as American Sign Language (ASL), Sign Language of the Netherlands (NGT), and Swedish Sign Language (SSL). Apart from a few exceptions, it was only from the 1990s onwards that these descriptions were complemented by studies focusing on non-western sign languages, e.g. Brazilian Sign Language (LSB), Indopakistani Sign Language (IPSL), and Japanese Sign Language (NS). More recently, the so-called 'village sign languages', that is, sign languages used in village communities with a high incidence of genetic deafness, have entered the stage of sign language linguistics (see chapter 24 for discussion). In Figure 23.1, different types of manual communication systems are arranged along a continuum of complexity, and possible developmental paths from one system to another are pointed out.
Fig. 23.1: Types of manual communication systems; the arrows indicate possible developments of one system into another
Fig. 23.2: The mosaic of sign language data (adapted from Zeshan 2008, 675)
Focusing on the rightmost box in Figure 23.1, the natural sign languages, Zeshan (2008, 675) presents different subtypes in a 'mosaic of sign language data', an adapted version of which is presented in Figure 23.2. In this mosaic, western and non-western sign languages are both classified as 'urban sign languages', contrasting with village sign languages. Note that Zeshan also hypothesizes that further sign language types may yet remain to be discovered (the '?'-box in Figure 23.2). Taken together, the discussion in this section shows that manual communication systems differ from each other with respect to (at least) the following parameters: (i) the complexity and expressivity of the system; (ii) the type and size of the community (or group) in which the system is used; and (iii) the influence of the surrounding spoken language on the system (see Crystal/Craig (1978, 159) for a classificatory matrix of different types of manual communication systems ('signing behaviors'), ranging from cricket signs via symbolic dancing to ASL).
3.2. Sign languages and linguistic typology

Having introduced different types of sign systems and sign languages, I will now zoom in on natural sign languages in order to address some of the attested inter-modal and intra-modal typological patterns and distinctions. Two questions will guide our discussion: (i) to what extent can typological classifications that have been proposed on the basis of spoken languages be applied to sign languages, and (ii) to what extent do sign languages differ from each other typologically? Obviously, developments within the field of sign language typology have gone hand in hand with the increased number of sign languages being subject to linguistic investigation. Given that many typologically relevant aspects are discussed extensively in sections II and III of this handbook, I will only provide a brief overview of some of the phenomena that have been investigated from a typological perspective; I refer the reader to the relevant chapters for examples and additional references. I will focus on morphological typology, word order, negation, and agreement (also see Schuit/Baker/Pfau 2011; Slobin accepted).
3.2.1. Morphological typology

Spoken languages are commonly classified based on their morphological typology, that is, the amount of (linear) affixation and fusion. A language with only monomorphemic words is of the isolating type, while a language which allows for polymorphemic words is synthetic (or polysynthetic if it also features noun incorporation). A synthetic language in which morphemes are easily segmented is agglutinative; if segmentation is impossible, it is called fusional (Comrie 1989). Signs are known to be of considerable morphological complexity (Aronoff/Meir/Sandler 2005), but the fact that morphemes tend to be organized simultaneously rather than sequentially makes a typological classification less straightforward. Consider, for instance, the NGT verb give. In its base form, this verb is articulated with a u-hand and consists of a location-movement-location (L-M-L) sequence (movement away from the signer's body). The verb can be modified such that it expresses a complex meaning like, for example, 'You give me a big object with some effort' by changing the handshape,
the direction and manner of movement, as well as non-manual features. All of these changes happen simultaneously, such that the resulting sign is still of the form L-M-L; no sequential affixes are added. Simultaneity, however, is not to be confused with fusion; after all, all of the morphemes involved (viz. subject and object agreement, classifier, manner adverb) are easily segmented. It therefore appears that NGT is agglutinative (a modality-independent classification), but that morphemes are capable of combining simultaneously (a modality-specific feature). Admittedly, simultaneous morphology is also attested in spoken languages (e.g. tone languages), but usually there is a maximum of two simultaneously combined morphemes. As for intra-modal typology, it appears that all sign languages investigated to date are of the same morphological type. Still, it is possible that they differ from each other in the number of manual and non-manual morphological operations that can be applied to a stem (Schuit 2007).
3.2.2. Word order

In the realm of syntax, word order (or, more precisely, constituent order) is probably the typological feature that has received most attention. For many spoken languages, a basic word order has been identified, where 'basic' is usually determined by criteria such as frequency, distribution, pragmatic neutrality, and morphological markedness (Dryer 2007). Typological surveys have revealed that by far the most common word orders are S(ubject)-O(bject)-V(erb) and SVO. In Dryer's (2011) sample of 1377 languages, 565 are classified as SOV (41 %) and 488 (35 %) as SVO. The third most frequent basic word order is VSO, which is attested in 95 (7 %) of the languages in the sample. In other words: in 83 % of all languages, the subject precedes the object, and in 79 % (including the very few OVS and VOS languages), the object and the verb are adjacent. However, it has been argued that not all languages exhibit a basic word order (Mithun 1992). According to Dryer, 189 languages in his sample (14 %) lack a dominant word order. Given that, to date, word order has only been investigated for a small number of sign languages, it is impossible to draw firm conclusions. A couple of things, however, are worth noting. First, in all sign languages for which a basic word order has been identified, the order is either SOV (e.g. Italian Sign Language, LIS) or SVO (e.g. ASL). Second, for some sign languages, it has also been suggested that they lack a basic word order (Bouchard 1997). Third, it has been claimed that in some sign languages, word order is not determined by syntactic notions, but rather by pragmatic (information structure) notions, such as Topic-Comment. Taken together, we can conclude (i) that word order typology can usefully be applied to sign languages, and (ii) that sign languages differ from each other in their basic word order (see Kimmelman (2012) for a survey of factors that may influence word order; also see chapter 12 for discussion).
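The aggregate percentages can be verified arithmetically: the subject precedes the object in the SOV, SVO, and VSO types taken together, while verb and object are adjacent in SOV, SVO, VOS, and OVS. The counts for the rarer orders (VOS 25, OVS 11, OSV 4) are not given in the text above; they are supplied here from Dryer's (2011) sample purely for illustration:

\[
\frac{565 + 488 + 95}{1377} \approx 0.83, \qquad
\frac{565 + 488 + 25 + 11}{1377} \approx 0.79
\]

Together with the 189 languages lacking a dominant order, these figures exhaust the sample: 565 + 488 + 95 + 25 + 11 + 4 + 189 = 1377.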
3.2.3. Negation

In all sign languages studied to date, negation can be expressed manually (i.e. by a manual particle) and non-manually (i.e. by a head movement). Therefore, at first sight,
the expression of negation appears to be typologically highly homogeneous. However, based on a typological survey, Zeshan (2004) proposes that sign language negation actually comes in two different types: manual dominant and non-manual dominant systems. The former type of system is characterized by the fact that the use of a manual negative particle is obligatory; such a system has been identified in, for example, Turkish Sign Language (TİD) and LIS. In contrast, in non-manual dominant sign languages, sentences are commonly negated by a non-manual marker only; this pattern is found, for instance, in NGT, ASL, and IPSL. Moreover, there are differences with respect to the non-manual marker. First, as far as the form of the marker is concerned, some sign languages (e.g. TİD) employ a backward head tilt, in addition to a negative headshake (which is the most common non-manual marker across all sign languages studied). Second, within the group of non-manual dominant sign languages, there appear to be sign-language-specific constraints concerning the scope of the non-manual marker (see chapter 15 for discussion).
As for the typology of negation in spoken languages, an important distinction is that between particle negation (e.g. English) and morphological/affixal negation (e.g. Turkish). Moreover, in languages with split negation (e.g. French), two negative elements ⫺ be it two particles or a particle and an affix ⫺ are combined to negate a proposition (Payne 1985). According to Pfau (2008), this typology can be applied to sign languages. He argues that, for instance, German Sign Language (DGS), a non-manual dominant sign language, has split negation, with the manual negator being a particle (which, however, is optional) and the non-manual marker, the headshake, being an affix which attaches to the verb. In contrast, LIS has simple particle negation; in this case, the particle may be lexically specified for a headshake. If this account is on the right track, then, as before, we find inter-modal typological similarities as well as intra-modal differences.
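The two classifications just discussed lend themselves to a compact tabular representation. The sketch below simply restates them as lookup tables; only the languages explicitly mentioned in the text above are included:

# Zeshan's (2004) typology: is the manual negative particle obligatory?
zeshan_negation_type = {
    "TİD": "manual dominant",
    "LIS": "manual dominant",
    "NGT": "non-manual dominant",
    "ASL": "non-manual dominant",
    "IPSL": "non-manual dominant",
}

# Pfau's (2008) application of the spoken language typology of negation.
pfau_negation_type = {
    "DGS": {"type": "split negation",
            "particle": "optional",
            "headshake": "affix attaching to the verb"},
    "LIS": {"type": "particle negation",
            "particle": "obligatory",
            "headshake": "lexically specified on the particle"},
}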
3.2.4. Agreement
The sign language phenomenon that some scholars refer to as 'agreement' is particularly interesting from a cross-modal typological point of view because it is realized in the signing space by modulating phonological properties (movement and/or orientation) of verbs (see chapter 7 for extensive discussion; for a recent overview also see Lillo-Martin/Meier (2011)). We know from research on spoken languages that languages differ with respect to the 'richness' of their verbal agreement systems. At one end of the continuum lie languages with a 'rich' system, where every person/number distinction is spelled out by a different morphological marker (e.g. Turkish); at the other end, we find languages in which agreement is never marked, that is, 'zero' agreement languages (e.g. Chinese). All languages that fall in between the two extremes could be classified as 'poor' agreement languages (e.g. English, Dutch). A further classification is based on the distinction between subject and object agreement. In spoken languages, object agreement is more marked than subject agreement, that is, all languages that have object agreement also have subject agreement, while the opposite is not the case. Finally, in a language with agreement ⫺ be it rich or poor ⫺ generally all verbs agree in the same way (Corbett 2006).
All of these aspects appear to be different in sign languages. First, in all sign languages for which an agreement system has been described, only a subgroup of verbs (the so-called 'agreeing' verbs) can be modulated to show agreement (Padden 1988). Leaving theoretical controversies aside, one could argue that agreeing verbs mark every person/number distinction differently, that is, by dedicated points in space. In contrast, other verbs ('plain verbs') can never change their form to show agreement. Hence, in a sense, a rich and a zero agreement system are combined within a single sign language. Second, subject agreement has been found to be generally more marked than object agreement in that (i) some verbs only show object agreement and (ii) subject agreement is sometimes optional. In addition, while agreement markers for a certain person/number combination may differ significantly across spoken languages, all sign languages that mark agreement do so in a strikingly similar way. Still, we also find intra-modal variation. Some sign languages, for instance, do not display an agreement system of the type sketched above (e.g. Kata Kolok, a village sign language of Bali (Marsaja 2008)). In other sign languages, agreement can be realized by dedicated auxiliaries in the context of plain verbs (see chapter 10 for discussion). It thus seems that in the realm of agreement, well-known typological classifications are only of limited use when it comes to sign languages (also see Slobin (in press) for a typological perspective on sign language agreement). Space does not allow me to go into detail, but at least some of the patterns we observe are likely to result from specific properties of the visual modality, in particular, the use of signing space and the body of the signer (Meir et al. 2007).
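The rich/poor/zero continuum can be stated as a small classification rule. In the sketch below, the function and its integer parameters are illustrative shorthand for the criteria given above (how many person/number distinctions receive a dedicated marker); they are not a standard metric:

def agreement_richness(marked_distinctions, total_distinctions):
    # 'Rich': every person/number distinction has its own marker (e.g.
    # Turkish); 'zero': agreement is never marked (e.g. Chinese);
    # everything in between counts as 'poor' (e.g. English, Dutch).
    if marked_distinctions == total_distinctions:
        return "rich"
    if marked_distinctions == 0:
        return "zero"
    return "poor"

# The sign language situation described above combines both extremes in a
# single language: 'agreeing' verbs pattern as a rich system (dedicated
# points in space), while 'plain' verbs pattern as a zero system.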
3.2.5. Summary
The above discussion makes clear that sign language typology is a worthwhile endeavor ⫺ both from an inter- and intra-modal perspective. One can only agree with Slobin (in press), who points out that "the formulation of typological generalizations and the search for language universals must be based […] on the full set of human languages ⫺ spoken and signed". As for inter-modal variation, we have seen that certain (but not all) typological classifications are fruitfully applied to sign languages. Beyond the aspects addressed above, this has also been argued for the typology of relative clauses: just like spoken languages, sign languages may employ head-internal or head-external relative clauses (see chapter 16, Complex Sentences, for discussion). Slobin (in press) discusses additional typological parameters such as locus of marking (head- vs. dependent marking), framing (verb- vs. satellite-framed), and subject vs. topic-prominence, among others, and concludes that all sign languages are head-marking, verb-framed, and topic-prominent, that is, that there is no intra-modal variation in these areas.
This brings us back to the question whether sign languages ⫺ certain typological differences notwithstanding ⫺ are indeed typologically more similar than spoken languages and to what extent the modality determines these similarities ⫺ a question that I will not attempt to answer here (see chapter 25, Language and Modality, for further discussion). Obviously, recurring typological patterns might also be due to genetic relationships between sign languages (see chapter 38) or reflect the influence of certain areal features also attested in surrounding spoken languages (e.g. use of question particles in
East Asian sign languages). In addition, socio-demographic factors such as type of community (community size and number of second language learners) and size of geographical area in which a language is used have also been argued to have an influence on certain grammatical properties of a language (Kuster 2003; Lupyan/Dale 2010). This latter factor might, for instance, result in a certain degree of typological homogeneity among village sign languages. At present, however, little is known about the impact of such additional factors on sign language typology.
4. Tactile sign languages
Sign languages are visual languages and therefore, successful signed communication crucially relies on visual contact between the interlocutors (as pointed out in section 2.2, this constraint may have contributed to the emergence of spoken languages). As a consequence, sign language is not an accessible means of communication for people who are deaf and blind. Tactile sign languages are an attempt to overcome this obstacle by shifting the perception of the language from the visual to the haptic channel. Obviously, this shift requires certain accommodations. In this section, I will first say a few words about the etiology of deafblindness before turning to characteristic features of tactile sign languages.
4.1. Deafblindness
'Deafblindness' is a cover term which describes the condition of people who suffer from varying degrees of visual and hearing impairment. It is important to realize that the term does not necessarily imply complete deafness and blindness; rather, deafblind subjects may have residual hearing and/or vision. Still, all deafblind people have in common that their combined impairments impede access to visual and acoustic information to the extent that signed or spoken communication is no longer possible.
Deafblindness (DB) may have various etiologies. First, we have to distinguish congenital DB from acquired DB. Congenital DB may be a symptom associated with congenital rubella (German measles) syndrome, which is caused by a viral infection of the mother during the first months of pregnancy. Congenital DB rarely occurs in isolation; it usually co-occurs with other symptoms such as low birth weight, failure to thrive, and heart problems. The most common cause of acquired DB appears to be one of the various forms of Usher syndrome, an autosomal recessive genetic disorder. All subjects with Usher syndrome suffer from retinitis pigmentosa, a degenerative eye disease which affects the retina and leads to progressive reduction of the visual field (tunnel vision), sometimes resulting in total blindness. Usher type 1 is characterized by congenital deafness while subjects with Usher type 2 are born hard-of-hearing. Occasionally, in the latter type, hearing loss is progressive. In addition, DB may result from hearing and/or visual impairments associated with ageing ⫺ actually, this is probably the most common cause of DB. Three patterns have to be distinguished: (i) a congenitally deaf person suffers from progressive visual impairment; (ii) a congenitally blind person suffers from progressive hearing loss; or (iii) a person born with normal
hearing and vision experiences a combination of both deteriorations (Aitken 2000; Balder et al. 2000).
Depending on the onset and etiology of DB, a deafblind individual may choose different communication methods. Some of these methods are related to spoken language, or rather writing, in that they employ haptic representations of letters. Letters may, for instance, be drawn in the palm of the deafblind receiver. A faster method is the so-called Lorm alphabet (after Hieronymus Lorm (1821⫺1902), who, deafblind himself, developed the system in 1881), which assigns letters to locations on the fingers or the palm. Some letters are represented by a point (e.g. 'E' ⫺ touch top of receiver's ring finger), others by lines (e.g. 'D' ⫺ brush along receiver's middle finger from top towards palm); see Figure 23.3.
Fig. 23.3: The Lorm alphabet (palm of left hand shown)
Other communicative strategies are based on sign language; these strategies are more commonly used by individuals who are born with unimpaired vision but are congenitally or prelingually deaf and have acquired sign language at an early age. People with Usher syndrome, for instance, who can still see but suffer from tunnel vision may profit from signed input when use is made of a reduced signing space in front of the face. Once the visual field is further reduced or has disappeared completely, a subject may switch to tactile sign language, a mode of communication that will be elaborated on in the next section.
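The point-and-line encoding of the Lorm alphabet amounts to a simple mapping from letters to tactile gestures. In the sketch below, only the entries for 'E' and 'D' are taken from the description above; a full implementation would fill in the remaining letters from the chart in Figure 23.3:

lorm_alphabet = {
    "E": ("point", "top of receiver's ring finger"),
    "D": ("line", "along receiver's middle finger, from top towards palm"),
    # ... remaining letters omitted here
}

def lorm_spell(word):
    # Spell a word letter by letter as a sequence of tactile gestures.
    return [(letter, lorm_alphabet[letter]) for letter in word.upper()]

print(lorm_spell("ed"))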
4.2. Characteristics of tactile communication
Generally, tactile sign languages are based on existing natural sign languages which, however, have to be adapted in certain ways to meet the specific needs of deafblind people. To date, characteristics of tactile communication have been explored for tactile ASL (Reed et al. 1995; Collins/Petronio 1998; Quinto-Pozos 2002), tactile SSL (Mesch 2001), tactile NGT (Balder et al. 2000), tactile French Sign Language (Schwartz 2009), and tactile Italian Sign Language (Cecchetto et al. 2010).
Conversations between deafblind people are limited to two participants. Four-handed interactions have to be distinguished from two-handed interactions. In the former, the conversational partners are located opposite each other and the receiver's hands are either both on top of the signer's hands (monologue position; see Figure 23.4) or are in different positions, one under and one on top of the signer's hands (dialogue position; Mesch 2001). In two-handed interactions, the signer and the receiver are located next to each other. In this setting, the receiver is usually more passive (e.g. when receiving information from an interpreter). In both settings, the physical proximity of signer and receiver usually results in a reduced signing space.
Fig. 23.4: Positioning of hands in tactile sign language; the person on the right is the receiver (source: http://www.flickr.com)
In the following subsections, we will consider additional accommodations at various linguistic levels that tactile communication requires.
4.2.1. Phonology
As far as the phonology of signs is concerned, Collins and Petronio (1998) observe that handshapes were not altered in tactile ASL, despite the fact that some handshapes are difficult to perceive (e.g. ASL number handshapes in which the thumb makes contact with one of the other fingers). Due to the use of a smaller signing space, the movement paths of signs were generally shorter than in visual ASL. Moreover, the reduced signing space was also found to affect the location parameter; in particular, signs without body contact tend to be displaced towards the center of the signing space. Balder et al. (2000) describe how in NGT, signs that are usually articulated in the signing space (e.g. walk) are sometimes articulated on the receiver's hand. In signs with body contact, Collins and Petronio (1998) observe an interesting adaptation: in order to make the interaction more comfortable, the signer would often move the respective body part towards the signing hand, instead of just moving the hand towards the body part to make contact. Finally, adaptations in orientation may result from the fact that the receiver's hand rests on top of the signer's hand. Occasionally, maintaining the correct orientation would require the receiver's wrist to twist awkwardly.
Collins and Petronio do not consider non-manual components such as mouthings and mouth gestures. Clearly, such components are not accessible to the deafblind receiver. Balder et al. (2000) find that in minimal pairs that are only distinguished by mouthing (such as the NGT signs brother and sister), one of the two would undergo a handshape change: brother is signed with a u-hand instead of a W-hand.
4.2.2. Morphology
Non-manuals also play a crucial role in morphology because adjectival and adverbial modifications are commonly expressed by non-manual configurations of the lower face (Liddell 1980; Wilbur 2000). The data collected by Collins and Petronio (1998) suggest that non-manual morphemes are compensated for by subtle differences in the sign's manual articulation. For instance, instead of using the non-manual adverbial "mm", which expresses relaxed manner, a verbal sign (e.g. drive) can be signed more slowly and with less muscle tension (also see Collins 2004). For NGT, Balder et al. (2000) also observe that manual signs may replace non-manual modifiers; for example, the manual sign very-much may take over the function of an intensifying facial expression accompanying the sign angry to express the meaning 'very angry'.
4.2.3. Syntax
Interesting adaptations are also attested in the domain of syntax, and again, for the most part, these adaptations are required to compensate for non-manual markers. Mesch (2001) presents a detailed analysis of interrogative marking in tactile SSL. Obviously, yes/no-questions pose a bigger challenge in tactile conversation than wh-questions, since the latter usually contain a wh-sign which is sufficient to signal the interrogative status of the utterance. Almost half of the yes/no-questions from Mesch's corpus are marked by an extended duration of the final sign. Mesch points out, however, that such a sentence-final hold also functions more generally as a turn change signal; it can thus not be considered an unambiguous question marker. In addition, she reports an increased use of pointing to the addressee (indexadr) in the data; for the most part, this index occurs sentence-finally, but it may also appear initially, in second position, and it may be doubled, as in (1a). In this example, the final index is additionally marked by an extended duration of 0.5 seconds (Mesch 2001, 148).
(1) a. indexadr interested fish reel-in indexadr-dur(0.5) [Tactile SSL]
'Are you interested in going fishing?'
b. indexadr what plane what [Tactile ASL]
'What kind of a plane was it?'
Other potential manual markers such as an interrogative (palm up) gesture or drawing of a question mark after the utterance were uncommon in Mesch’s data. In contrast, yes/no-questions are commonly ended with a general question sign in tactile NGT and tactile ASL (Balder et al. 2000; Collins/Petronio 1998). Moreover, Collins and Petronio report that in their data, many wh-questions also involve an initial index towards the receiver. Note that in the tactile ASL example in (1b), the index is neither subject nor object of the question (adapted from Collins/Petronio (1998, 30)); rather, it appears to alert the receiver that a question is directed to him. None of the above-mentioned studies considers negation in detail. While the negative polarity of an utterance is commonly signaled by a negative headshake only in the sign languages under investigation, it seems likely that in their tactile counterparts, the
use of manual negative signs is required (see Frankel (2002) for the use of tactually accessible negation strategies in deafblind interpreting).
In a study on the use of pointing signs in re-told narratives of two users of tactile ASL, Quinto-Pozos (2002) observes a striking lack of deictic pointing signs used for referencing purposes, i.e. for establishing or indicating a pre-established arbitrary location in signing space, which is linked to a non-present human, object, or locative referent. Both deafblind subjects only used pointing signs towards the recipient of the narrative (2nd person singular). In order to indicate other animate or inanimate referents, one subject made frequent use of fingerspelling while the other used nominal signs (e.g. girl, mother) or a sign (glossed as she) which likely originated from Signed English. Quinto-Pozos hypothesizes that the lack of pointing signs might be due to the non-availability of eye gaze, which is known to function as an important referencing device in visual ASL. The absence of eye gaze in tactile ASL "presumably influences the forms that referencing strategies take in that modality" (Quinto-Pozos 2002, 460). Also, at least in the narratives, deictic points towards third person characters have the potential to be ambiguous. Quinto-Pozos points out that deafblind subjects probably use pointing signs more frequently when referring to the location of people or objects in the immediate environment.
4.2.4. Discourse
As far as discourse organization is concerned, most of the available studies report that tactile sign languages employ manual markers for back-channeling and turn-taking instead of non-manual signals such as head nods and eye gaze (Baker 1977). Without going into much detail, manual feedback markers include signs like oh-i-see (nodding Y-hand), different types of finger taps that convey meanings such as "I understand" or "I agree", squeezes of the signer's hand, and repetition of signs by the receiver (Collins/Petronio 1998; Mesch 2001). Turn-taking signals on the side of the signer include a decrease in signing speed and lowering of the hands (see Mesch (2001, 82 ff.) for a distinction of different conversation levels in tactile SSL). Conversely, if the receiver wants to take over the turn, he may raise his hands, lean forward, and/or pull the passive hand of the signer slightly (Balder et al. 2000; Schwartz 2009).
In addition, deafblind people who interact on a regular basis may agree on certain "code signs" which facilitate the communication. A code sign may signal, for instance, that someone is temporarily leaving the room or it may indicate an emergency. For tactile NGT, Balder et al. (2000) mention the possibility of introducing a sentence by the signs tease or haha to inform the receiver that the following statement is not meant seriously, that is, to mark the pragmatic status of the utterance.
4.2.5. Summary
Taken together, the accommodations sketched above allow experienced deafblind signers to converse fluently in a tactile sign language. Thanks to the establishment of national associations for the deafblind, contact between deafblind people is increasing, possibly leading to the emergence of a Deafblind culture, distinct from, but embedded
within, Deaf culture (MacDonald 1994). It is to be expected that an increase in communicative interaction will lead to further adaptations and refinements of the source sign language to meet the specific needs of deafblind users.
5. Secondary sign languages
In contrast to the sign languages discussed in the previous sections, secondary sign languages (sometimes also referred to as 'alternate sign languages') do not result from the specific communicative needs of deaf or deafblind people. Rather, they are developed in hearing societies in which they are used as a substitute for spoken language in certain situations. Amongst the motivations for the development of a secondary sign language are religious customs and the need for a mode of communication in contact situations. Generally, secondary sign languages are not full-fledged natural sign languages but rather gestural communication systems, or 'kinesic codes' (Kendon 2004), with restricted uses and varying degrees of elaboration. This crucial difference notwithstanding, the term 'sign language' will be used throughout this section.
Four types of secondary sign languages will be considered in the following subsections: Sawmill Sign Language, monastic sign languages, Aboriginal sign languages of Australia, and Plains Indian Sign Language. In all subsections, an attempt will be made to provide information about the origin and use of the respective sign language, its users, and selected aspects of its structure. It should be pointed out at the outset, however, that the four sign languages addressed in this section are highly diverse from a linguistic and sociolinguistic point of view ⫺ possibly too diverse to justify subsuming them under a single label. I will get back to this issue in sections 5.4 and 5.5.
5.1. Sawmill Sign Language
In section 3.1, I pointed out that simple gestural communication systems are sometimes used in settings that preclude oral communication (e.g. hunting, diving). Occasionally, such gestural codes may develop into more complex systems (see Figure 23.1). In this section, I will discuss a sign language which emerged in a sawmill, that is, in an extremely noisy working environment in which a smooth coordination of work tasks is required.
5.1.1. On the origin and use of Sawmill Sign Language
According to Johnson (1977), a sawmill sign language ⫺ he also uses the term 'industrial sign-language argot' ⫺ has been used widely in the northwestern United States and western Canada. The best-documented case is a language of manual gestures spontaneously created by sawmill workers in British Columbia (Canada) (Meissner/Philpott 1975a,b). For one of the British Columbia mills, Meissner and Philpott describe a typical situation in which the sign language is used: the communicative interaction between
three workers at the head saw (see Figure 23.5, which is part of a figure provided by Meissner/Philpott (1975a, 294)).
Fig. 23.5: Layout of a section of British Columbia sawmill: the head saw, where slabs of wood are cut off the log (re-drawn from a sketch provided by Meissner/Philpott (1975a, 294)). Copyright for original sketch © 1975 by Gallaudet University Press. Reprinted with permission.
The head sawyer (➀ in Figure 23.5) controls the placing of the log onto the carriage while the tail sawyer (➂) guides the cants cut from the log as they drop on the conveyor belt. Both men face the setter (➁), who sits in a moving carriage above their heads, but they cannot see each other. The setter, who has an unobstructed view of the mill, controls the position of the log and co-operates with the head sawyer in placing the log. While the mill is running, verbal communication among the workers is virtually impossible due to the immense noise. Instead, a system of manual signs is used. Meissner and Philpott (1975a, 292) report that they "were struck by its ingenuity and elegance, and the opportunity for expression and innovation which the language offered under these most unlikely circumstances". For the most part, signs are used for technical purposes, in particular, to make the rapid coordination of tasks possible. In one case, the head sawyer signed to the setter index1 push-button wrong tell lever-man ('I pushed the wrong button. Tell the leverman!') and, within seconds, the setter passed the message on to the leverman. In another case, one of the workers signed time change saw-blade ('It's time to change the blade'). Interestingly, however, it turned out that use of signs was not confined to the transmission of technical information. Rather, the workers also regularly engaged in personal conversations. The tail sawyer, for instance, would start with a gestural remark to the setter, which the setter, after moving his carriage, would pass on to the head sawyer, who in turn would make a contribution. Most of the observed personal exchanges involved terse joking (2a) ⫺ "all made with the friendliest of intentions" (Meissner/Philpott 1975a, 298) ⫺ or centered on topics such as cars, women (2b), and sports events (2c).
(2) a. index2 crazy old farmer [Sawmill SL]
'You crazy old farmer.'
b. index1 hear index2 woman knock^up
'I hear your wife is pregnant.'
c. how football go
'How's the football game going?'
When comparing sign use in five mills, Meissner and Philpott (1975a) observe that a reduction of workers due to increased automation leads to a decline in the rate of manual communication. They speculate that further automation will probably result in the death of the sign language. It thus seems likely that at present (i.e. 37 years later), Sawmill Sign Language is not used anymore. Johnson (1977) reports a single case of a millworker ⫺ in Oregon, not in British Columbia ⫺ who, after becoming deaf, used sign language to communicate with his wife and son. Johnson claims that this particular family sign language is an extension of the sawmill sign language used in southeast Oregon. Based on a lexical comparison, he concludes that this sign language is closely related to the Sawmill Sign Language described by Meissner and Philpott.
5.1.2. Lexicon and structure of Sawmill Sign Language
Based on direct observation and consultation with informants, Meissner and Philpott (1975b) compiled a dictionary of 133 signs, 16 of which are number signs and eight specialized technical signs (e.g. log-not-tight-against-blocks). Some number signs may also refer to individuals; two, for instance, refers to the engineer and five to the foreman, corresponding to the number of blows on the steam whistle used as call signals. Not surprisingly, most of the signs are iconically motivated. The signs woman and man, for example, are based on characteristic physical properties in that they refer to breast and moustache, respectively. Other signs depict an action or movement, e.g. turning a steering wheel for car and milking a cow for farmer (2a). Interestingly, pointing to the teeth signifies saw-blade. Meissner and Philpott also describe "audiomimic" signs in which the form of the sign is motivated by phonological similarity of the corresponding English words: grasping the biceps for week (week ⫺ weak), grasping the ear lobe for year (ear ⫺ year), and use of the sign two in the compound two^day ('Tuesday'; cf. the use of two (for 'to') and four (for 'for') described in the next section).
The authors found various instances in which two signs are combined in a compound, such as woman^brother ('sister'), fish^day ('Friday'), and knock^up ('pregnant', cf. (2b)). At least for the first of these, the authors explicitly mention that the order of signs cannot be reversed. Also note that the first two examples are not loan translations from English. Pointing is used frequently for locations (e.g. over-there) and people; lip and face movements (including mouthings) may help in clarifying meanings. In order to disambiguate a name sign that could refer to several people, thumb pointing can be used.
As for syntactic structure, the examples in (2) suggest that the word order of Sawmill Sign Language mirrors that of English. However, just as in many other sign languages, a copula does not exist. Depending on the distance between interlocutors, interrogatives are either introduced by a non-manual marker (raised eyebrows or backward jerk of head) or by the manual marker question, which is identical to the sign how (2c) and is articulated with a fist raised to above shoulder height, back of hand facing
outward. Meissner and Philpott do not mention the existence of grammatical non-manual markers that accompany strings of signs, but they do point out that mouthing of a word may make a general reference specific. In conclusion, it appears that generally, "the sawmill sign language is little constrained by rules and open to constant innovation" (Meissner/Philpott 1975a, 300).
5.2. Monastic sign languages
While noise was the motivation for development of the sawmill sign language discussed in the previous section, in this section, the relevant factor is silence. Silence plays a significant and indispensable role in monastic life. It is seen as a prerequisite to a life without sin. "The usefulness of silence is supremely necessary in every religious institute; in fact, unless it is properly observed, we cannot speak of the religious life at all, for there can be none" (Wolter 1962; cited in Barakat 1975, 78). Hence, basically all Christian monastic orders impose a law of silence on their members. However, only in a few exceptional cases is this law of silence total. For the most part, it only applies to certain locations in the cloister (e.g. the chapel and the dormitory) and to certain times of the day (e.g. during reading hours and meals).
5.2.1. On the origin and use of monastic sign languages
According to van Rijnberk (1953), a prohibition against speaking was probably imposed for the first time in 328 by St. Pachomius in a convent in Egypt. In the sixth century, St. Benedict of Nursia wrote "The Rule of Benedict", an influential guide to Western monasticism, in which he details spiritual and moral aspects of monastic life as well as behavioral rules. Silence is a prominent feature in the Rule. In chapter VI ("Of Silence"), for instance, we read: "Therefore, because of the importance of silence, let permission to speak be seldom given to perfect disciples even for good and holy and edifying discourse, for it is written: 'In much talk thou shalt not escape sin' (Prov 10:19)" (Benedict of Nursia 1949). St. Benedict also recommends the use of signs for communication, if absolutely necessary (chapter XXXVIII: "If, however, anything should be wanted, let it be asked for by means of a sign of any kind rather than a sound"). Later, all of the religious orders that emerged from the order of St. Benedict ⫺ the Cistercians, Trappists, and Cluniacs ⫺ maintained the prescription of silence.
A fixed system of signs came into being with the foundation of Cluny in the year 909 (Bruce 2007). In 1068, a monk named Bernard de Cluny compiled a list of signs, the Notitia Signorum. This list contains 296 signs, "a sizeable number which seems to indicate that many were in use before they were written down" (Barakat 1975, 89). Given an increasing influence of the Cluniacs from the eleventh century on, signs were adopted by other monasteries throughout Western Europe (e.g. Great Britain, Spain, and Portugal).
It is important to point out that monastic sign languages were by no means intended to increase communication between monks in periods of silence. Rather, the limited inventory of signs results from the desire to restrict communication. "The administration of the Order has rarely seen fit to increase the sign inventory for fear of intrusion
upon the traditional silence and meditative atmosphere in the monasteries" (Barakat 1975, 108) ⫺ one may therefore wonder why Barakat's dictionary includes compound signs like wild+time ('party'). Signs may vary from one convent to another but generally, as remarked by Buyssens (1956, 30 f.), the variation is limited "de sorte qu'un Trappiste de Belgique peut parfaitement se faire comprendre d'un Trappist de Chine" ('so that a Trappist from Belgium can perfectly well make himself understood by a Trappist from China').
5.2.2. Lexicon and structure of Cistercian Sign Language
The most thorough studies on monastic sign language to date are the ones by Barakat (1975) and Bruce (2007). Barakat studied the sign language as used by the monks of St. Joseph's Abbey in Spencer, Massachusetts. His essay on the history, use, and structure of Cistercian Sign Language (CisSL) is supplemented by a 160-page dictionary, which includes photographs of 518 basic signs and the manual alphabet as well as lists describing derived (compound) signs, the number system, and signs for important saints and members of St. Joseph's Abbey. In contrast, Bruce (2007) explores the rationales for religious silence and the development and transmission of manual forms of communication. His study contains some information on the Cluniac sign lexicon and the visual motivation of signs, but no further linguistic description of the language. In the following, I will therefore focus for the most part on the information provided by Barakat (but also see Stokoe (1978)).
Many of the signs that are used reflect in some way the religious and occupational aspects of the daily lives of the brothers. Barakat distinguishes five different types of signs. First, there are the pantomimic signs. These are concrete signs which are easily understood because they either manually describe an object or reproduce actual body movements that are associated with the action the sign refers to. Signs like book and cross belong to the former group while signs like eat and sleep are of the latter type. Not surprisingly, these signs are very similar or identical to signs described for natural sign languages. Secondly, the group of pure signs contains signs that bear no relation to pantomimic action or speech. These signs are arbitrary and are therefore considered "true substitutes for speech, […] an attempt to develop a sign language on a more abstract and efficient level" (Barakat 1975, 103). Examples are god (two A-hands contact each other to form a triangle), day (@-hand contacts cheek), and yellow (R-hand draws a line from between eyebrows to tip of nose). Group three comprises what Barakat refers to as qualitative signs. Here, the relation between a sign and its meaning is associative, "roughly comparable to metaphor or connotation in spoken language" (p. 104). Most of the signs in this group are compounds. Geographical notions, for instance, generally include the sign courtyard plus modifier(s), as illustrated by the examples in (3).
(3) a. drink + T + courtyard ('England') [CisSL]
b. red + courtyard ('Russia')
c. secular + courtyard + shoot + president + K ('Dallas, TX')
The examples also illustrate that use is made of handshapes from the manual alphabet: the 'T' in (3a) representing 'tea', the 'K' in (3c) as a stand-in for 'Kennedy' (note that this manual alphabet is different from the one used in ASL). Other illustrative examples of qualitative signs are mass + table ('altar'), red + metal ('copper'), and black + water ('coffee').
The last two groups of signs are interesting because they include complex signs that are partially or completely dependent upon speech by exploiting homonymy (e.g. knee ⫺ ney, see below) as well as fingerspelling. Most of these signs are invented to fill gaps in the vocabulary. Barakat distinguishes between signs partially dependent on speech and speech signs, but the line between the two groups appears to be somewhat blurry. Clear examples of the former type are combinations that reflect derivational processes such as, for example, sing + R ('singer') and shine + knee ('shiney') ⫺ this is reminiscent of the phenomenon that Meissner and Philpott refer to as 'audiomimic' signs. In the latter group, we find combinations such as sin + sin + A + T ('Cincinnati, Ohio') and day + V ('David').
Stokoe (1978) compares the lexicons of CisSL and ASL and finds that only one out of seven CisSL signs (14 %) resembles the corresponding ASL sign. It seems likely that most of these signs are iconic, that is, belong to the group of pantomimic signs. Stokoe himself points out that in many cases of resemblances, the signs may be related to 'emblems' commonly used in American culture (e.g. drive, telephone). Based on this lexical comparison, he concludes that CisSL and ASL are unrelated and have not influenced each other.
Turning to morphology, there seems to be no evidence for morphological structure beyond the process referred to as compounding above and the derivational processes that are based on spoken language. But even in CisSL compounds, signs are merely strung together and there is no evidence for the phonological reduction or assimilation processes that are characteristic of ASL compounds (Klima/Bellugi 1979; see chapter 5, Word Classes and Word Formation, for discussion). Thus, the CisSL combination hard + water can be interpreted as 'ice' but also as 'hard water'. In contrast, a genuine ASL compound like soft^bed can only mean 'pillow' but not 'soft bed'. Barakat distinguishes between simple derived signs, which consist of a maximum of three signs, and compound signs, which combine more than three signs. Compound signs may be of considerable complexity, as shown by the examples in (4). Clearly, expressing that Christ met Judas in Gethsemane would be a cumbersome task.
(4) a. vegetable + courtyard + cross + god + pray + all + time [CisSL]
'Gethsemane' (= 'the garden (vegetable + courtyard) where Christ (cross + god) prayed for a long time')
b. secular + take + three + O + white + money + kill + cross + god
'Judas' (= 'man who took thirty pieces of silver (white + money) that killed Christ (cross + god)')
While simple signs generally have a fixed constituent order, the realization of compound signs may “vary considerably from brother to brother because of what they associate with the places or events” (Barakat 1975, 114 f.). With respect to syntactic structure, Barakat (1975, 119) points out that, for the most part, the way signs are combined into meaningful utterances “is dependent upon the spoken language of the monks and the monastery in which they live”. Hence, in CisSL of St. Joseph’s Abbey, subject-verb-complement appears to be the most basic pattern. Index finger pointing may serve the function of demonstrative and personal pronouns,
but only when the person or object referred to is in close proximity. Occasionally, fingerspelled 'B' and 'R' are used as the singular (6a,b) and plural copula, respectively. Negation is expressed by the manual sign no, which occupies a pre-verbal position, just as it does in English (e.g. brother no eat).
CisSL does not have a dedicated interrogative form. Barakat observes that yes/no-questions are preceded or followed by either a questioning facial expression or a question mark drawn in the air with the index finger. Also, there is only one question word that finds use in wh-questions; Barakat glosses this element as what and notes that it may combine with other elements to express specific meanings, e.g. what + time ('when') or what + religious ('who'; literally 'what monk'). Such simple or complex question signs always appear sentence-initially. From his description, we may infer that a questioning look is not observed in wh-questions.
Finally, the expression of complex sentences including dependent clauses appears to be difficult in CisSL. "The addition of such clauses is one source of garbling in the language and most, if not all, the monks interviewed had some trouble with them" (Barakat 1975, 133). The complex sentence in (5a) is interesting in a couple of respects: first, the sign all is used as a plural marker; second, rule expresses the meaning of how, while the connective but is realized by the combination all + same; third, the sign two is used as infinitival to; and finally, the plural pronoun we is a combination of two indexical signs (Barakat 1975, 134).
(5) a. all monk know rule two give vegetable seed all same ix2 ix1 not know rule [CisSL]
'The monks know how to plant vegetables but we don't.'
b. wood ix2 give ix1 indulgence two go two work
'Can I go to work?'
Modal verbs are generally expressed by circumlocutions. Example (5b) shows that can is paraphrased as 'Would you give me permission?', the sign wood being used for the English homophone would. As a sort of summary, I present part of The Lord's Prayer in (6). Note again the use of the sign courtyard, of fingerspelled letters, of concatenated pronouns, and of the combination of four + give to express forgive.
(6) a. ix2 ix1 father stay god courtyard blessed B ix2 name [CisSL]
'Our Father, who art in Heaven, hallowed be thy name;'
b. ix2 king courtyard come ix2 W B arrange
'thy kingdom come, thy will be done,'
c. this dirt courtyard same god courtyard
'on earth as it is in Heaven.'
d. give ix2 ix1 this day ix2 ix1 day bread
'Give us this day our daily bread,'
e. four give ix2 ix1 sin same ix2 ix1 four give sin arrange fault
'and forgive us our trespasses as we forgive those who trespass against us.'
Some of the solutions the Cistercians came up with appear rather ingenious. Still, it is clear that the structure is comparatively simple and that there is a strong influence from the surrounding spoken language. Barakat stresses the fact that CisSL has traditionally
been intended only for the exchange of brief, silent messages, and that, due to its "many defects", it can never be an effective means for communicating complex messages. He concludes that "[a]lthough this sign language, as others, is lacking in many of the grammatical elements necessary for expressing the nuances of thought, it does function very effectively within the context of the monastic life" (Barakat 1975, 144).
5.3. Aboriginal sign languages
The use of complex gestural or sign systems by Aborigines has been reported for many different parts of Australia since the late 19th century. Kendon (1989, 32) provides a map indicating areas where sign language has been or still is used; for the sign languages still in use, symbols on the map also provide information on the frequency of use and the complexity of the system. The symbols suggest that the most complex systems are found in the North Central Desert area and on Cape York. Kendon himself conducted his research in the former area, with particular attention to the sign languages of the Warlpiri, Warumungu, and Warlmanpa (Kendon 1984, 1988, 1989). In his earlier studies, Kendon speaks about Warlpiri Sign Language (WSL ⫺ since all of the data were collected in the Warlpiri community of Yuendumu), but in his 1989 book, he sometimes uses the cover term North Central Desert Sign Languages (NCDSLs). Another study (Cooke/Adone 1994) focuses on a sign language used at Galiwin'ku and other communities in Northeast Arnhemland, which is referred to as Yolngu Sign Language (YSL). According to the authors, YSL bears no relation to the sign languages used in Central Australia (beyond some shared signs for flora, fauna, and weapons; Dany Adone, personal communication).
5.3.1. On the origin and use of Aboriginal sign languages
Kendon (1984) acknowledges that NCDSLs may, in the first instance, have arisen for use during hunting, as is also suggested by Divale and Zippin (1977, 186), who point out that the coordination of activities during hunting "requires some system of communication, especially if the plans of the hunters are to be flexible enough to allow them to adapt to changing conditions of the chase" ⫺ clearly a context that would favor the development of a silent communication system that can be used over larger distances. Hunting, however, is certainly not the most important motivation for sign language use. Rather, NCDSLs are used most extensively in circumstances in which speech is avoided for reasons of social ritual (also see Meggitt 1954). As for the North Central Desert area, Kendon (1989) identifies two important ritual contexts for sign language use: (i) male initiation and, more importantly, (ii) mourning.
At about 13 years of age, a boy is taken into seclusion by his sister's husband and an older brother. After some initial ceremonies, he is taken on a journey, which may last two or three months, during which he learns about the topography of the region and acquires hunting skills. Following the journey, the boy undergoes circumcision and after he has been circumcised, he goes into seclusion again for another two to three months. During the first period of seclusion until after the circumcision, the boy is enjoined to remain silent. As pointed out by Meggitt (1975, 4), "novices during initiation
ceremonies are ritually dead" and since dead people cannot speak, they should communicate only in signs. The extent to which a boy makes use of signs during that period, however, appears to vary. Finally, after circumcision, the boy is released from all communicative restriction in a special ceremony (Kendon 1989, 85 f.).
More important as a factor motivating sign use, however, are ceremonies connected with death and burial. In all communities studied by Kendon, speech taboos are observed during periods of mourning following the death of a group member. The taboo, however, applies only to women ⫺ in some communities only to the widow, in others also to other female relatives of the deceased. Duration of the speech taboo varies depending on factors such as "closeness of the relative to the deceased […] and the extent to which the death was expected" (Kendon 1989, 88) and may last up to one year (for widows). As in the case of male initiation, the taboo is lifted during a special 'mouth opening' ceremony.
Findings reported in Kendon (1984) suggest that WSL is not common knowledge for all members of the Yuendumu community. Rather, use of WSL was mostly confined to middle-aged and older women. This is probably due to the fact that the most important context for sign use, mourning, is restricted to women, as is also supported by the observation that women who experienced bereavement showed better knowledge of WSL. Meggitt (1954) also notes that women generally know and use more signs than men do, but he adds as an additional factor that the use of signs allows women to gossip about topics (such as actual and probable love affairs) that are not meant for the husband's ears.
As for YSL, Cooke and Adone (1994, 3) point out that the language is used during hunting and in ceremonial contexts "where proximity to highly sacred objects demands quietness as a form of respect"; however, they do not mention mourning as a motivation for sign language use. Interestingly, they further suggest that in the past, YSL may have served as a lingua franca in extended family groups in which, due to compulsory exogamy, several spoken Yolngu languages were used (also see Warner 1937). Moreover, they point out that YSL is also used as a primary language by five deaf people (three of them children at the time) ⫺ a communicative function not mentioned by Kendon. Actually, the data reported in Cooke and Adone (1994) come from a conversation between a hearing and a deaf man (also see Kwek (1991) for use of sign language by and in communication with a deaf girl in Punmu, an Aboriginal settlement in the Western Desert region in Western Australia).
5.3.2. Lexicon and structure of Aboriginal sign languages
According to Kendon (1984), WSL has a large vocabulary. He recorded 1,200 signs and points out that the form of the majority of signs is derived from depictions of some aspect of their meaning, that is, they are iconic (see chapter 18 for discussion). Often, however, the original iconicity is weakened or lost (as also observed by Frishberg (1975) for ASL). Kendon (1989, 161) provides the example of the sign for 'mother', in which the fingertips of a spread hand tap the center of the chest twice. It may be tempting to analyze this form as making reference to the mother's breasts, but clearly this form "is not in any sense an adequate depiction of a mother's breast". Kendon
describes various strategies for sign creation, such as presenting (e.g. rdaka 'hand': one hand moves toward the signer while in contact with the other hand), pointing (e.g. langa 'ear': tip of @-hand touches ear), and characterizing (e.g. ngaya 'cat': ?-hand represents the arrangement of a cat's paw pads). Often, characterizing signs cannot be understood without knowledge about certain customs. In the sign for 'fully initiated man', for instance, the u-hand is moved rapidly across the upper chest representing the horizontal raised scars that are typical for fully initiated men (Kendon 1989, 164). Interestingly, there are also signs that are motivated by phonetic characteristics of the spoken language. For instance, the Warlpiri word jija may mean 'shoulder' and 'medical sister' (the latter resulting from an assimilation of the English word sister). Given this homophony, the WSL sign for 'shoulder' (tapping the ipsilateral shoulder with the middle finger) is also used for 'medical sister' (Kendon 1989, 195). Similarly, in YSL, the same sign (i.e. touching the bent elbow) is used for 'elbow' and 'bay' because in Djambarrpuyngu, one of the dominant Yolngu languages, the term likan has both these meanings (Cooke/Adone 1994).
Compounds are frequent in NCDSLs, but according to Kendon, almost all of them are loans from spoken languages. That is, in almost all cases where a meaning is expressed by a compound sign, the same compound structure is also found in the surrounding spoken language. Fusion (i.e. reduction and/or assimilation) of the parts is only occasionally observed; usually the parts retain their phonological identity. For instance, in Anmatyerre, we find the compound kwatyepwerre ('lightning'), which is composed of kwatye ('water') and pwerre ('tail'); in the sign language, the same meaning is also expressed by the signs for 'water' and 'tail' (Kendon 1989, 207). An example of a native compound is the sign for 'heron', which combines the signs for 'neck' (a pointing sign) and 'tall' (upward movement of @), yielding a descriptive compound that can be glossed as 'neck long' (the corresponding Warlpiri word kalwa is monomorphemic). See Kendon (1989, 212⫺217) for discussion of Warlpiri preverb constructions, such as jaala ya-ni ('go back and forth'), that are rendered as two-part signs in WSL.
Reduplication is a common morphological process in Australian languages, and it is also attested in the sign languages examined. As for nominal plurals, Kendon (1989, 202 f.) finds that nouns that are pluralized by means of reduplication in Warlpiri (especially nouns referring to humans, such as kurdu 'child') are also reduplicated in WSL (e.g. kurdu++), while signs that are pluralized by the addition of a suffix (-panu/-patu) are pluralized in WSL by the addition of a quantity sign.
34. Lexicalization and grammaticalization
One recurring question is whether the directionality of grammaticalization ⫺ from lexical to grammatical to more grammatical (e.g., an auxiliary) ⫺ can be considered universal, such that grammatical items do not reverse their pathways of change to ultimately emerge as lexemes. This topic will be taken up once again in section 5 below, but for the present, we will begin by stating that grammaticalization refers to the process of the emergence of grammatical items (that then participate in grammatical categories such as tense or aspect marking, case marking, and the like) and not simply to the fact that something exists in the grammar of a language.
Enough is understood from diachronic studies of grammaticalization for us to conclude that if something exists in the grammar of a language, even without clear diachronic evidence, we can presume that it got there somehow through a diachronic process of change, and has not appeared suddenly as a fully functional grammatical item (see Wilcox 2007). Lexicalization, on the other hand, refers generally to the process of the emergence of lexemes, or items listed in the lexicon of a language. Lexicalized items are regularized as institutionalized (community-wide) usages with particular lexical class features and constraints (Brinton and Traugott 2005; see also Haiman 1994). Word formation processes such as compounding and conversion are seen as inputs to lexicalization. Thus, lexicalization as a process of change equally does not mean simply that a word is lexical but rather that it is undergoing, or has undergone, such change in a principled way. For the purpose of the present discussion, definitions of both lexicalization and grammaticalization are taken from Brinton and Traugott (2005). Although there are differences in opinion on definitional specifics, these theoretical debates will not be undertaken here in the interest of space and of pointing the discussion in the direction of sign language evidence, but the reader is referred to such seminal work as Brinton and Traugott (2005), Bybee (2003), Bybee et al. (1994), Heine et al. (1991), Heine and Kuteva (2007), Heine and Reh (1984), Hopper (1991), Hopper and Traugott (2003), and others for detailed accounts of theoretical principles and language examples.
2.1. Lexicalization
The definitions of lexicalization and grammaticalization adopted for the present discussion are from Brinton and Traugott (2005). Lexicalization is thus defined as follows (Brinton/Traugott 2005, 96):
Lexicalization is the change whereby in certain linguistic contexts speakers use a syntactic construction or word formation as a new contentful form with formal and semantic properties that are not completely derivable or predictable from the constituents of the construction or the word formation pattern. Over time there may be further loss of internal constituency and the item may become more lexical.
In a synchronic sense, Brinton and Traugott note, lexicalization has been taken to mean the coding of conceptual categories, but in a diachronic sense, lexicalization is the adoption of an item into the lexicon following a progression of change. Further, we may consider lexicalizations that are in fact innovations created for a particular, or local, discourse event, but which are neither institutionalized (i.e., conventionalized usages throughout the language community) nor listable in the lexicon. Such productive innovations are widely reported in the sign language literature, but here we will focus on the diachronic and institutional senses of lexicalization. Traugott and Dasher (2002, 283) define lexicalization as “a change in the syntactic category of a lexeme given certain argument structure constraints, e.g. use of the nouns calendar or window as verbs or […] the formation of a new member of a major category by the combination of more than one meaningful element, e.g. by derivational morphology or compounding”. Various word formation processes lead to lexicalization, including compounding and blending, derivation and conversion. A reanalysis that involves the weakening or loss of the boundary between words or morphemes leading to compounding is a type of lexicalization (Hopper/Traugott 2003), meaning that while reanalysis has often been thought of as a grammaticalization process, it does not take place solely within that domain. Brinton and Traugott (2005) refer to this as “fusion”, wherein individually definable features of compositionality are decreased in favour of the new whole. While the component parts contributing to a new lexical item lose their individual autonomy, the new lexical word gains an autonomy of its own. This fusion has also been referred to as “univerbation”, the “unification of two or more autonomous words to form a third; univerbation is also involved in lexicalizations of phrases into lexemes […] or of complex into simple lexemes” (Brinton/Traugott 2005, 68).
2.2. Grammaticalization

Grammaticalization is the process whereby functional categories come into being, either when lexical items take on a grammatical function in certain constructions, or when items that are already grammatical in nature develop into further grammatical categories. Here, we adopt the definition of grammaticalization given by Brinton and Traugott (2005, 99):

Grammaticalization is the change whereby in certain linguistic contexts speakers use parts of a construction with a grammatical function. Over time the resulting grammatical item may become more grammatical by acquiring more grammatical functions and expanding its host-classes.
Grammaticalization is thus a process of change in language whereby grammatical elements emerge and continue to evolve over time. Bybee (2003, 146) sees grammaticalization as “the process by which a lexical item or a sequence of items becomes a grammatical morpheme, changing in distribution and function in the process”. For example, when a verb of motion or of desire (e.g., meaning ‘go’ or ‘wish’) evolves into a future marker, it loses verb features (i.e., it is “decategorized” (Hopper 1991)) and emerges with an entirely different distribution in the syntax of the language as a grammatical marker. Grammaticalization is a gradual process wherein an item that is lexical in nature (or, as we have come to learn about sign languages, could be gestural in nature) participates in a construction that becomes increasingly grammatical in function, along with significant changes in meaning and potentially, but not necessarily, in form. Form changes are always in the direction of phonetic reduction and loss (e.g., I’m going to > I’m gonna > I’menna in spoken English). Meaning changes are from concrete and literal meanings to those more abstract and general (Brinton/Traugott 2005), sometimes referred to as “bleaching”, perhaps intended to mean semantic loss, but as Brinton and Traugott point out, this term is not particularly descriptive. Instead, they suggest that lexical content meaning is replaced by abstract, grammatical meaning. In the process of grammaticalization, older forms of the lexical source often remain viable in the language, with newer grammaticalizing usages “layering”, to use Hopper’s (1991) term. Older forms may or may not disappear. This usually results in a great deal of both formal and functional variation in usage. If we consider gestural sources apparent in sign languages, it is thus not coincidental that some items seem at times linguistic and at times gestural; within the framework of grammaticalization, this is neither surprising nor problematic.

Grammaticalization and lexicalization are not processes opposite from or in opposition to one another, however; rather, they are two developmental processes of different sorts in language evolution. Lexicalization, on the one hand, is responsible for the creation of new words (“adoption into the lexicon”: Brinton/Traugott 2005, 20) or of words used in new ways, such as a change in syntactic category. Grammaticalization, on the other hand, leads to the creation of new grammatical items (or constructions: Bybee 2003) from either lexical words or “intermediate” grammatical items or, as is the case for sign languages, from gestural sources without an intervening lexical word stage (Janzen/Shaffer 2002; Wilcox 2007). In fact, the two processes may frequently work in tandem, beginning with the creation of a new lexical word, which may itself have a truly gestural source, and from that, the later development of a grammatical morpheme.

To date, most work on grammaticalization has looked at the evolution of spoken language grammar. These studies have revealed a number of theoretical principles that are thought to be universal (Bybee et al. 1994; Heine et al. 1991; Heine/Reh 1984; Hopper 1991; Hopper/Traugott 2003), leading Bybee (2001) to suggest that universals of change may in fact be more robust than synchronic language universals generally.
If so, we might assume that the same grammaticalization processes take place in sign languages, as work has begun to demonstrate for ASL finish as a perfective and completive marker (Janzen 1995, 2003), topic constructions (Janzen 1998, 1999; Janzen/Shaffer 2002), a case marker in Israeli SL (Meir 2003), negation in German Sign Language (DGS; Pfau/Steinbach 2006), modals in several sign languages (Shaffer 2000, 2002, 2004; Wilcox/Shaffer 2006; Wilcox/Wilcox 1995), and discourse markers in ASL (Wilcox 1998), among others.

Whereas early work on grammaticalization suggested that metaphor was perhaps the most productive mechanism behind grammaticalizing elements (see Bybee et al. 1994), it is now evident that metonymy plays a crucial role as well. Both lexicalization and grammaticalization involve semantic change, but what takes place here is not quite the same for each process. Lexicalization commonly involves innovation, in which the new item appears rather abruptly (although some changes, for example the radical phonological change in some compounds, may not be abrupt). In grammaticalization, however, change occurs slowly over time, characterized by overlapping forms and variation until, most probably motivated by pragmatic inferencing (Traugott/Dasher 2002), new grammatical constructions arise in which older meanings have generalized and new ⫺ often dramatically reduced ⫺ phonological forms solidify.
3. Lexicalization in sign languages

What it means to be “lexicalized” in sign languages is not a simple matter because of a number of factors, described in more detail below in this section; primarily, the issue has to do with a high level of productivity in many construction types. Johnston and Schembri (1999) offer a definition of lexemes in Australian Sign Language (Auslan) that is helpful in our attempt to distinguish a form that is lexicalized from one that is not (Johnston/Schembri 1999, 126):

A lexeme in Auslan is defined as a sign that has a clearly identifiable and replicable citation form which is regularly and strongly associated with a meaning which is (a) unpredictable and/or somewhat more specific than the sign’s componential meaning potential, even when cited out of context, and/or (b) quite unrelated to its componential meaning potential (i.e., lexemes may have arbitrary links between form and meaning).
According to Sutton-Spence and Woll (1999), in British Sign Language (BSL), the number of lexical signs is relatively small. Sutton-Spence and Woll claim that lexical signs are those signs that can be listed in citation form where the meaning is clear out of context, or which are in the signer’s mental lexicon. They cite The Dictionary of British Sign Language/English (Brien 1992) as containing just 1,789 entries, which they suggest is misleading in terms of the overall lexicon of BSL because the productivity of signs not found in the core lexicon is the more important source of vocabulary. Johnston and Schembri contrast lexemes with other signs of Auslan that maintain at least some accessible componentiality in that meaningful component parts are still identifiable and contribute meaning to the whole. They suggest that handshape, location, movement, and orientation are “phonomorphemes” (Johnston/Schembri 1999, 118) that can individually be meaningful (e.g., a flat handshape identifying a flat surface) and that contribute to a vast productivity in formational/meaning constructions which are not fully lexicalized and thus are not in the lexicon proper of the language (see also Zeshan’s (2003) discussion of lexicalization processes in Indo-Pakistani Sign Language (IPSL), based largely on Johnston and Schembri’s criteria). So-called ‘classifier’ handshapes that participate in productivity and the creation of novel forms have been noted in numerous sign languages (see chapter 8). Supalla (1986), in one of the first descriptions of a classifier system in a sign language (ASL), states that these handshapes participate in the ASL class of verbs of motion and location. Supalla claims that signers can manipulate the participating handshape morpheme
in ways that suggest that they recognize that these handshapes are forms that are independent within the construction, and thus, under Johnston and Schembri’s definition, would not qualify as lexicalized. In these productive forms that depict entities and events (see Liddell 2003; Dudis 2004), handshapes, positioning and locations of the hands (and body), and movement are all dynamic, which means that they can be manipulated to reflect any number of shapes, movements, and interactions. Nonetheless, classifier forms have been seen as sources leading to lexicalization (Aronoff et al. 2003). Lexicalization takes place when a single form begins to take on specific meaning which, as Johnston and Schembri note, may not necessarily be predictable from the component parts. They list sister in Auslan as an example (Johnston/Schembri 1999, 129), articulated with an upright but hooked index finger tapping the nose; yet, the meaning of ‘sister’ is not seen as related to any act of tapping the nose with the finger or another hooked object. In form, lexicalized signs become more or less invariable. Slight differences in articulation do not alter the meaning. Further examples from Auslan are given in Figure 34.1, from Johnston and Schembri (1999).
Fig. 34.1: In Auslan, picture (‘square shaped in vertical plane’) is a lexicalized tracing pattern, lock (‘turn small object in vertical surface’) is a lexicalized handling classifier form, and meet (‘two people approach each other’) is a lexicalized proform (Johnston/Schembri 1999, 128, their Figures 7⫺9). Copyright © 1999 by John Benjamins. Reprinted with permission.
Sign languages appear to allow for a wide range of related forms, stretching from highly productive, fully componential, meaningful complexes to lexicalized forms as described above. This means that even though a lexicalized sign may exist, less lexicalized forms are possible that may take advantage of distinguishable component parts. Regarding this, Johnston and Schembri (1999, 129 f.) point out that “most sign forms which are lexicalized may still be used or performed in context in such a way as to foreground the meaning potential of one or more of the component aspects”. This potential appears to be greater for signed than for spoken languages. This is no doubt at least in part due to the iconic manipulability of the hands as articulators moving in space and the conceptual ability to represent things other than actual hands. Dudis (2004) refers to this as an aspect of body partitioning, which leads to a plethora of meaningful constructions at the signer’s disposal, and is one of the defining characteristics of sign languages.
3.1. meet in ASL, IPSL, and Auslan

In a number of sign languages, for example, ASL, IPSL, and Auslan, a lexicalized sign meaning ‘to meet’ (see Figure 34.1 above) is articulated similarly, clearly lexicalized from a classifier form (the recent controversies concerning sign language classifiers are not discussed here, but see, for example, Schembri (2003) and Sallandre (2007)). The upright extended index finger as a classifier handshape is often called a “person classifier” (e.g., Zeshan 2003) but this may be over-ascribing semantic properties to it, even though, prototypically, it may represent a person, if only because we so frequently observe and discuss humans interacting. In the present context, Frishberg (1975, 715) prefers “one self-moving object with a dominant vertical dimension meets one self-moving object with a dominant vertical dimension” when two such extended index fingers are brought together in space. A classifier construction such as this is highly productive, as Frishberg also notes, meaning that the actions of two individuals, whatever they might be (approaching, not approaching, one turning away from the other, etc.), can be articulated. However, in some contexts, this productivity is significantly diminished. Frishberg glosses her classifier description as meet. But since the form is in fact highly productive, the notion of ‘meeting’ would only apply in some contexts; thus the gloss is not appropriate for this classifier overall, and is reserved in the present discussion for the lexicalized form meet (‘to meet’).

In the case of lexicalized meet, at least for ASL, the resulting meaning has little to do with the physical event of two people approaching one another, and more to do with initial awareness of the person, for example, in the context of ‘I met him in the sixties’. The lexicalized form has lost compositional importance as well, such that the path movements of the two hands do not align with located referents, that is, they are spatially arbitrary. Problematic, however, is that this lexicalized version glossed as meet appears to be the very end point of a continuum of articulation possibilities from fully compositional to fully lexicalized and non-productive. In ASL, for example, if the signer has the lexicalized meaning in mind, but it is at least somewhat tied to the physical event, the articulated path movement may not be fully arbitrary. This illustrates that productive and lexical categories may not be discrete, and thus explains why it is sometimes difficult to determine how lexicalized forms should be characterized. For Johnston and Schembri, there is the resulting practical dilemma of what should and should not be included as lexemes in a dictionary, which may contribute to a seemingly low number of lexemes in sign languages altogether.
3.2. Gesture and lexicon

Sign language and gesture have long had an uneasy alliance, with much early formal analysis working to show that sign language is not just elaborate gesturing, such that the question of whether signers gesture at all has even been asked (Emmorey 1999; see also chapter 27 on gesture). More recently, some researchers have looked for potential links between gesture and sign language, partly due to a renewed interest in gestural sources of all human language in an evolutionary sense (e.g., Armstrong/Stokoe/Wilcox 1995; Armstrong/Wilcox 2007; Stokoe 2001; see also chapter 23, Manual Communication Systems: Evolution and Variation). One area of investigation has concerned the role that gesture plays in grammaticalization in sign languages, as illustrated in section 4 below. Gesture has been noted as the source for signs in the lexicon of sign languages as well, although once again much work has attempted to show that gestural sources for lexical items give way to formal, arbitrary properties, especially in terms of iconicity (Frishberg 1975). However, it is undeniable that gestures are frequently such sources, even though no comprehensive study of this phenomenon has been undertaken. Here we illustrate the link between gesture and lexicon from one source, IPSL (Zeshan 2000), but others are noted in section 3.3 below.

Zeshan (2000) cites examples of IPSL signs that are identical in form and meaning to gestures found among hearing people, but where usage by signers differs from usage by hearing gesturers in some way, for example, the gestures/signs for ‘thanks’ (Figure 34.2) and ‘money’ (Figure 34.3). Zeshan found that the gesture for ‘thanks’ is restricted in use to beggars, whereas the IPSL sign $ukriya: (‘thanks’) is unrestricted and used by anyone in any context. The gesture for ‘money’, once adopted into IPSL (labeled paisa: ‘money’ in Zeshan 2000), participates in signed complexes such as paisa:^dena: (‘give money’) when the sign is moved in a direction away from the signer (Zeshan 2000, 39). In contrast, the gestural form is not combinable, nor can its form be altered by movement. Gestures such as these are likely widespread as sources for lexical signs, but as has been demonstrated for IPSL, as signers co-opt these gestures and incorporate them into the conventionalized language system, they conform to existing patterning within that language in terms of phonetic, morphological, and syntactic constraints and the properties of the categories with which they become associated.

Fig. 34.2: $ukriya: ‘thanks’ in IPSL (Zeshan 2000, 147). Copyright © 2000 by John Benjamins. Reprinted with permission.

Fig. 34.3: paisa: ‘money’ in IPSL (Zeshan 2000, 166). Copyright © 2000 by John Benjamins. Reprinted with permission.
3.3. Common word formation processes: compounding, conversion, and fingerspelling

New word formation in sign languages is often innovative: signers combine iconic parts (handshapes, movements, etc.) into some new structure that, if the innovation is useful throughout the language community, may be institutionalized and thus lexicalized. This may take place in response to changing technology, social and cultural changes, and education. As discussed at the beginning of section 3, such innovations are usually compositional, using metonymic representations of referent characteristics or properties. The item referred to may be quite abstract, and thus the resulting representation may also be metaphoric in nature. While lexicalization is in progress, forms used in the community may be quite variable for a period of time until one form emerges as an institutionalized lexicalized sign. One recent ASL example is the sign email, shown in Figure 34.4.

Fig. 34.4: email in ASL. The dominant hand index finger moves away from the signer several times (adapted from Signing Savvy, http://www.signingsavvy.com/index.php; retrieved August 9, 2009). Image copyright © 2009, 2010 Signing Savvy, LLC. All rights reserved. Reprinted with permission.

Other means of creating lexical items are also common, such as compounding and blending, conversion, and derivation. Borrowing may also contribute to the lexicon of a language, and in sign languages, this may be borrowing from another sign language or from a surrounding spoken language, primarily through fingerspelled forms.
3.3.1. Compounding

Compounding, as mentioned above, is a frequent source of new words in spoken languages, and this is no less true for sign languages (also see chapter 5, Word Classes and Word Formation). Johnston and Schembri (1999) draw a distinction between productive compounding, whereby two nominal lexemes are articulated phonologically as a compound that fits a more local discourse purpose, and lexicalized compounds, which become operational as new, distinct lexical items, often with phonological structure that differs radically from the simple combination of the two source lexemes, and a meaning that may or may not reflect the source lexeme meanings and may be quite arbitrary. Drawing on the work of Klima and Bellugi (1979) and others, Sutton-Spence and Woll (1999, 102) state that in lexicalized BSL compounds (e.g., ‘blood’ from red^flow; ‘people’ from man^woman; ‘check’ from see^maybe):
⫺ the initial hold of the first sign is lost;
⫺ any repeated movement in the second sign is lost;
⫺ the base hand of the second sign is established at the point in time when the first sign starts;
⫺ there is rapid transition between the first and second sign;
⫺ the first sign is noticeably shorter than the second.

Although there are differences cross-linguistically, these observations are fairly indicative of lexicalized compounding across sign languages. Johnston and Schembri suggest, however, that the resulting forms in lexicalized compounding in Auslan may best be referred to as blends.
3.3.2. Conversion and derivation

Changes in lexical category by conversion or derivation in sign languages are most often noted in relation to noun-verb pairs, discussed extensively for ASL by Supalla and Newport (1978) but also noted in a number of other sign languages such as Auslan (Johnston/Schembri 1999), in which the category of noun or verb is indicated by different movement patterns (see chapter 5, Word Classes and Word Formation, for further discussion). But Johnston and Schembri also point out that a noun and a verb within a given pair, no matter whether they are examples of conversion or whether one category is derived from the other, cannot be considered as independent words and thus are not entered into the lexicon separately.
3.3.3. Lexicalized fingerspelling

In a number of sign languages, some very commonly fingerspelled words have become stylized and often considerably reduced in complexity so as to be considered as lexicalized signs. Battison (1978) shows that the list of lexicalized fingerspellings in ASL ⫺ ‘fingerspelled loan signs’ in his terminology ⫺ is quite extensive. Lexicalized fingerspellings are typically quite short, between two and five letters, and are often reduced to essentially the first and last letters; for example, b-a-c-k in ASL becomes b-k. Evidence that these forms are not simply reduced fingerspellings comes from the observation that they can take on features of other lexemes and participate in grammatical constructions. b-k, for example, can move in the direction of a goal. Brentari (1998) observes that frequently such lexicalized fingerspellings reduce in form to conform to a general constraint on handshape aperture change within monomorphemic signs, that is, there will be only one opening or one closing aperture change. The lexicalization of the fingerspelled b-u-t in ASL, for instance, involves the reduction of the overall item to a B (u) handshape, often characterized by a lack of tenseness such that the handshape appears as a slightly lax 5.

4. Grammaticalization in sign languages

Wilcox (2007) outlines two routes by which gesture may enter the grammar of a sign language: some grammatical items have followed the gesture > lexical item > grammatical item pathway, while others have bypassed a lexical stage, instead taking the route of gesture > grammatical item, without an intervening lexical stage whatsoever. As mentioned above, the ability to recognize gestural sources has recently been acknowledged as an important insight in our understanding of grammaticalization generally (Heine/Kuteva 2007).

In some (but not all) contexts, grammatical markers have appeared as affixes in spoken languages, as is frequently the case with tense and aspect markers. In contrast, clearly definable affixation has not been reported often for sign languages, suggesting that this is not automatically the place to look for grammatical material. Issues surrounding the lack of affixation in sign languages will not be taken up here, partly because such affixation may not yet be very well understood (should, for example, as Wilcox (2004) suggests, co-occurring grammatical items articulated as facial gestures be considered a kind of affixation because they appear to be bound morphemes dependent on what is articulated with the hands?), and because developing
grammar does not necessarily depend on affixation even in a traditional sense. However, one account of affixing in Israeli SL is found in Meir and Sandler (2008, 49 f.), and Zeshan (2004) reports that affixation occurs in her typological survey of negation in sign languages. Below we look at examples from each of the two routes of grammaticalization as outlined by Wilcox (2007).
4.1. finish in ASL

finish in ASL has been shown to have developed from a fully functioning verb to a number of grammatical usages, including a completive marker and a perfective marker, which may in fact qualify as an affix (Janzen 1995). The fully articulated two-handed form of finish is shown in Figure 34.5. As a perfective marker, indicating that something has taken place in the past, the form is reduced phonologically to a one-handed sign which is articulated with a very slight flick of the wrist. When used as a perfective marker, finish always appears pre-verbally. In its completive reading, the sign may be equally reduced, but it is positioned either post-verbally or clause-finally. Interestingly, ASL signers report that finish as a full verb is nowadays rare in discourse. The grammatical use of finish in ASL has not reached inflectional status in that it does not appear to be obligatory. Also, Janzen (1995) does not report a gestural source for this grammaticalized item, but it is possible that such an iconic gestural element does exist, in which case finish would demonstrate Wilcox’s gesture > lexical item > grammatical item route of development. An additional grammaticalized use of finish in ASL is that of a conjunction (Janzen 2003), as illustrated in (1) (note that ‘(2h)’ indicates a two-handed version of a normally one-handed sign; // signifies a pause; +++ indicates multiple movements).
(1)                             top
    go(2h) restaurant // eat+++ finish take-advantage see train arrive        [ASL]
    ‘(We) went to a restaurant and ate and then got a chance to go and see a train arrive.’
Fig. 34.5: finish in its fully articulated form, with (a) as the beginning point and (b) as the end point. The reduced form employs only one hand and has a much reduced wrist rotation (Janzen 2007, 176). Copyright © 2007 by Mouton de Gruyter. Reprinted with permission.
In this case, finish is topic-marked (see section 4.5 for further discussion of topic marking), functioning neither as a completive marker on the first clause, nor as an informational topic. Rather, in this example, the manual sign and the non-manual marker combine to enable finish to function as a linker meaning ‘and then’. Additional such topic-marked conjunctions are discussed in Janzen, Shaffer, and Wilcox (1999).
4.2. Completive aspect in IPSL

Zeshan (2000) reports a completive marker in IPSL, labeled ho_gaya: (see Figure 34.6), which appears rather consistently in sentence-final position, and which may even accompany other lexical signs that themselves mean ‘to end’, as in (2) from Zeshan (2000, 63).
Fig. 34.6: The IPSL completive aspect marker ho_gaya: (Zeshan 2000, 39). Copyright © 2000 by John Benjamins. Reprinted with permission.
(2) xatam(a) ho_gaya:        [IPSL]
    end      compl
    ‘(The affair) ended (without result).’
Therefore, Zeshan claims that ho_gaya: has only a grammatical and no lexical function. In contrast to ASL finish, ho_gaya: has a gestural source that is identical in form and that means ‘go away’ or ‘leave it’ (Zeshan 2000, 40). Since there is no evidence that a lexical sign based on this gesture ever existed in IPSL, this may be considered an example of Wilcox’s second route to grammar, with no intervening lexical stage (for discussion of aspectual markers, see also chapter 9).
4.3. future in LSF and ASL

The marker of futurity in both modern French Sign Language (LSF) and modern ASL has been shown to have developed from a gestural source which has been in use from
at least classical antiquity onward and still in use today in a number of countries around the Mediterranean (Shaffer 2000; Janzen/Shaffer 2002; Wilcox 2007). Bybee et al. (1994) note that it is common for future markers in languages to develop out of movement verb constructions (as in be going to > gonna in English), verbs of desire, and verbs of obligation. De Jorio (2000 [1832]) describes a gesture in use at least 2000 years ago in which the palm of one hand is held edgewise moving out from underneath the palm of the other hand to indicate departure. This gesture is shown in Figure 34.7a, from Wylie (1977), a volume on modern French gestures. An identical form is listed for the Old LSF sign depart in Brouland (1855); see Figure 34.7b.

Fig. 34.7: (a) The French gesture meaning ‘to depart’ (Wylie 1977, 17); (b) Old LSF depart (Brouland 1855).

Shaffer (2000) demonstrates that shortly after the beginning of the 20th century, a similar form ⫺ although with the dominant, edgewise hand moving outward in an elongated path ⫺ was in use in ASL to mean both ‘to go’ and ‘future’. Because this historical form of the lexical verb go and the form future (perhaps at this stage also a lexical form) co-existed in signers’ discourse, this represents what Hopper (1991) calls ‘layering’, that is, the co-existence of forms with similar shapes but with different meanings and differing in lexical/grammatical status. The elongated movement suggests movement along a path. At some point in time, then, two changes took place. First, the lexical verb go having this form was replaced by an unrelated verb form ‘to go’ and second, the sign future moved up to the level of the cheek, perhaps as an instance of analogy in that it aligned with other existing temporal signs articulated in the same region. Analogy in this respect has been considered as a motivating force in grammaticalization (Fischer 2008; Itkonen 2005; Krug 2001). This change to a higher place of articulation may have been gradual: Shaffer found examples of usages in LSF at an intermediate height. Once at cheek-level, only a future reading is present; this form cannot be used as a verb of motion, a change that took place both in ASL (Figure 34.8a) and LSF (Figure 34.8b). The future marker has thus undergone decategorialization as it moved along a pathway from full verb to future marker, which in modern ASL appears pre-verbally. The forms illustrated in Figure 34.8 also represent a degree of phonological reduction. Brentari (1998) states that the articulation of signs is phonologically reduced when the fulcrum of movement is distalized. In the LSF and ASL future marker, the fulcrum has shifted from the shoulder to the elbow, and in the most reduced forms, to the wrist.

Fig. 34.8: (a) future in modern ASL; (b) future in modern LSF (both from Shaffer 2000, 185 f.). Copyright © 2000 by Barbara Shaffer. Reprinted with permission.

As is frequently the case with grammaticalizing items, multiple forms can co-exist, often with accompanying variation in phonological form. The form illustrated in Figure 34.8a can appear clause-finally in ASL as a free morpheme with both future and intentionality meanings. The path movement can vary according to the perceived distance in future time: a short path for an event in the near future, a longer path for something in the distant future (note that deictic facial gestures usually accompany these variants, but these are not discussed here). In addition, the movement can also vary in tenseness depending on the degree of intentionality or determination. In ASL, this future marker can also occur pre-verbally, although much of the variation in form seen in the clause-final marker does not take place. The movement path is shortened, perhaps with just a slight rotation of the wrist. The most highly reduced form appears prefix-like, with the thumb contacting the cheek briefly followed by handshape and location assimilation to that of the verb. In this case, the outward movement path of future is lost altogether. The grammaticalization pathway of the future marker in LSF and ASL is one of the clearest examples we find of the pathway gesture > lexical item > grammatical item, based on evidence of usage at each stage of development.
4.4. Negative headshakes as grammatical negation

A negating headshake is reported as a facial/head gesture in a number of sign languages such as DGS (Pfau 2002; Pfau/Quer 2002; Pfau/Steinbach 2006) and many others (see Zeshan (2004) for a typological survey). Negative headshakes occur commonly across many cultures, either as co-speech or freestanding gestures, but as grammaticalized items in sign languages, they become regularized as part of specific constructions. Pfau and his colleagues indicate that a negative headshake can be the sole negator of the clause, and that there are language-specific constraints concerning its co-occurrence with manual signs (see also chapter 15 on negation). In ASL, the negating headshake
may co-occur either with or without a negative particle articulated on the hands. In Pfau’s (2002) examples of negation in DGS, the negative headshake (hs) occurs along with the verb alone or with verb plus negative particle, as in (3) (Pfau 2002, 273). Optionally, the headshake may spread onto the direct object.

(3)              hs   hs
    mutter blume kauf (nicht)        [DGS]
    mother flower buy.neg (not)
    ‘Mother does not buy a flower.’
The grammaticalization pathway of negative headshake gesture > grammatical negative marker again illustrates Wilcox’s route in which a lexical item does not intervene. Pfau and Steinbach (2006) refer to McClave (2001) and Kendon (2002) for a survey of the headshake as a gestural source in language use.
4.5. Topic constructions in ASL

The use of topic-comment structure has been reported in numerous sign languages. For ASL, Janzen (1998, 1999, 2007) and his colleagues (Janzen/Shaffer 2002; Janzen/Shaffer/Wilcox 1999) have shown that topic marking developed along the pathway given in (4):
(4) generalized questioning gesture > yes/no question marking > topic marking
In a widespread gesture used to enquire about something, the eyebrows are typically raised and the eyes wide open, and the hands may also be outstretched with palms up. It is important to note that this gesture is typically used when the focus is identifiable to both interlocutors, such as a bartender pointing at a bar patron’s empty glass and using the facial gesture to enquire about another drink. Yes/no-questions in ASL (and many other sign languages) are articulated with the same facial gesture, possibly along with a forward head-tilt, which likely has a gestural source as well. A head-tilt forward in interlocution signals attentiveness or interactional intent: the questioner is inviting a response. Note that in a yes/no-question, too, the basic information typically being asked about is something identifiable to the addressee, who is asked to respond either positively or negatively (e.g., Is this your book?). When accompanying a topic-marked phrase in ASL, the facial gesture may still appear very much like a yes/no-question, but in this case, it does not function interactively, but rather marks grounding information upon which to base some comment as new information (also see chapter 21, Information Structure). Raised eyebrows mark the topic phrase as well, although the head-tilt may be slightly backward or to the side, rather than forward. Janzen (1998) found that topic phrases could be noun phrases, temporal adverbial and locative phrases, or whole clauses (which may consist of a verb only, since subjects and objects may not be overt). Topics appear sentence-initially and are followed by one or more comments (but note the further grammaticalized topic-marked finish described in section 4.1 above that links preceding and following clauses). Topic constructions contain shared or identifiable information and, even
though they pattern like yes/no-questions, do not invite a response; thus the interactive function of the yes/no-question has been lost (which may also explain the loss of the forward head-tilt). An example of a simple topic-comment structure in ASL is given in (5) (Janzen 2007, 181), with the facial (non-manual) topic marker shown in Figure 34.9. Once again, no lexical stage intervenes between the gestural source and the grammaticalized item.

(5)      top
    tomorrow night work        [ASL]
    ‘I work tomorrow night.’

Fig. 34.9: Facial gestures marking the topic phrase tomorrow night in the ASL example (5) (Janzen 2007, 180). Copyright © 2007 by Mouton de Gruyter. Reprinted with permission.
4.6. Evidentials in Catalan Sign Language (LSC)

Evidentials represent another area that illustrates Wilcox’s (2007) first type of grammaticalization pathway, where a gesture first acts as a source for a lexical item, which subsequently evolves into a grammatical usage. Wilcox and Wilcox (1995) report on evidentials in ASL such as mirror, which has as its gestural source the representation of holding a mirror to one’s face, and which has then evolved into a modal function often glossed as seem. Wilcox (2007) illustrates the same pathway with an interesting and elaborate set of evidentials in LSC such as that given in Figure 34.10. Here, a gesture indicating the face has evolved in LSC into the item remble (‘resemble’), but with the grammatical meaning of subjective belief that something is the case based on some sort of evidence, as illustrated in (6) (Wilcox 2007, 114; slightly adapted).
(6) resemble index3 today come no        [LSC]
    ‘It seems that she’s not coming today.’
Increased subjective stance is one well-documented marker of grammaticalized forms (Traugott 1989; Traugott/König 1991; Brinton/Traugott 2005).
Fig. 34.10: resemble (remble) in LSC (Wilcox 2007, 113). Copyright © 2007 by Mouton de Gruyter. Reprinted with permission.
5. The relationship between lexicalization and grammaticalization: Some issues for sign languages

Both processes of lexicalization and grammaticalization involve some change in meaning, but in different directions. Brinton and Traugott (2005, 108) suggest that items “that can undergo grammaticalization tend to have quite general meanings (e.g., terms for ‘thing,’ ‘go,’ ‘come,’ ‘behind’), while items that lexicalize often have highly specialized meaning (e.g., black market)”. In grammaticalization but not in lexicalization, lexical meaning loses ground and an operational meaning emerges or is “foregrounded” (Wischer 2000, 365). Lexicalization and grammaticalization are seen as processes that differ in their direction of change: lexicalization is a change toward the establishment of lexical entries while grammaticalization is a change toward the emergence of items within grammatical categories. Still, these processes do share some properties. Brinton and Traugott (2005, 101), for example, give evidence that both grammaticalization and lexicalization

[…] are subtypes of language change subject to general constraints on language use and acquisition. Lexicalization involves processes that combine or modify existing forms to serve as members of a major class, while grammaticalization involves decategorialization of forms from major to minor word class and/or from independent to bound element to serve as functional forms. Both changes may involve a decrease in formal or semantic compositionality and an increase in fusion.
Although lexicalization and grammaticalization differ significantly, there remain some questions regarding a possible relationship between the two processes. It was mentioned in section 1, for example, that an item may emerge through lexicalization which then participates in a grammaticalizing construction. Lexicalization is not the “reverse” of grammaticalization, however, as is sometimes suggested (see, for example, Zeshan 2003, 132), but it is not clear that the principle of unidirectionality in grammaticalization always holds, as suggested in section 5.1 below. Further, once certain signs have lexicalized, signers may still have access to their “parts”, in effect “de-lexicalizing” them, as discussed in section 5.2.
5.1. The case of classifier predicates and the problem of directionality

Johnston and Schembri (1999) suggest that some lexicalized forms may easily give way to more productive usages, as discussed in section 3 above. But what is the evolutionary relationship between the variable classifier forms (if classifiers are understood as grammatical categories) and invariable lexical forms? That is, which came first? The principle of unidirectionality tells us that grammatical change takes place in the direction of lexical > grammatical, but we might consider that lexemes such as meet (discussed in section 3.1 above) and chair (as in the chair/sit noun/verb pair; see Supalla/Newport 1978), among numerous other examples, solidify out of more productive possibilities that include classifier handshapes or property markers. Items such as meet and chair may be cases of what Haspelmath (2004, 28) terms “antigrammaticalization”, a type of change that runs counter to the usual direction of grammaticalization from discourse to syntax to morphology. Haspelmath makes clear that this does not mean grammaticalization in reverse, in the sense that a grammaticalized element progressively devolves back to its lexical source form (which presumably would have at one time been lost). Rather, we are dealing with a process where a form more lexical in nature develops from a more grammatical source. Thus we might conclude that lexical items like meet and chair have emerged from a wide range of variable classifier verb forms as specific, morphologically restricted signs because they encode prototypical event schemas. This may be more plausible than concluding that the variable classifier forms have grammaticalized from the lexical sources meet and chair, as would normally be expected in a grammaticalization pathway, but further work in this area is needed.
5.2. Can lexicalized signs be “de-lexicalized”?

Even though lexicalized signs are not meaningfully dependent on their componential parts, it seems that signers may evoke these parts in novel ways. The ASL lexeme tree, for example, is fully lexicalized in that the upright forearm and the open handshape with spread fingers together form a highly schematized articulation because the actual referent may have none of the features suggested by the form: there is no requirement that the referent tree labeled by the noun phrase has a straight, vertical trunk, nor that it has five equally spaced branches. Neither the signer nor the addressee will be concerned with discrepancies between the sign and the actual features of the referent tree because of the schematic and symbolic nature of the lexeme. And yet, should a signer wish to profile a certain part of the referent tree, the sign may be decomposed at least to some extent, say by pointing to the forearm in reference to the trunk only, or by referring to one of the fingers as representing one of the actual tree’s branches. Thus the whole may be schematized, but unlike monomorphemic words of spoken languages, the parts are still evocable if needed. Johnston and Schembri (1999, 130) suggest that this is a “de-lexicalization” process, and Brennan (1990) describes it as dormant iconic features becoming revitalized. Helpful to this discussion is the suggestion (Eve Sweetser, personal communication) that in a semantic analysis involving “mental spaces” (see, for example, Fauconnier 1985) the visible parts of articulated signs such as tree in a sign language are cognitively mapped onto the interlocutors’ mental image
of the referent, a mapping that is not available to speakers and hearers of a spoken language; thus the components within the lexeme are more easily available to users of sign languages. For signs that are claimed to have lost any connection to their iconic motivation, such as Auslan sister (Johnston/Schembri 1999, 129), decomposition is less available. However, in Johnston and Schembri’s sense, de-lexicalization must refer to individual instantiations of decomposition, and not to a change that affects the lexical item in an institutionalized way across the language community. Furthermore, we could not suggest that, just because such decompositional referencing is possible for some lexeme, the signer’s mental representation of the lexeme or its inclusion in the lexicon has weakened.
6. Conclusions

Despite the material difference between sign and spoken languages due to differences in articulation, or modality, we see evidence that sign languages change over time along principles similar to those governing changes in spoken languages. Language change along these lines in sign languages is beginning to be explored, which will undoubtedly tell us much about sign language typology and about the processes of change in language generally, no matter the modality of use. The domains of lexicon and grammar in sign languages are still not well understood, but as more sign languages are described, more information about how these domains are formed will emerge.

There remain some challenges, however. For example, the vast productivity and variation in form in sign languages, together with relatively small numbers of lexemes (at least in terms of what are usually considered dictionary entries), make it difficult to know at what stage lexicalization takes place and how stable lexicalized forms are. It is not certain whether the productivity discussed in this chapter is apparent because sign languages tend to be relatively young languages whose lexicons will expand over time as they evolve, or whether the principles behind compositionality and productivity pull word formation in a direction away from a solidified or ‘frozen’ lexicon. Then, too, how extensive might grammaticalization in sign languages be? As Johnston and Schembri (2007) point out, grammaticalization often takes centuries to unfold, and the youth of most sign languages may mean that many aspects of grammaticalization are newly underway. If this is the case, however, we might expect to find that many grammatical categories are at or near the beginning stages of their development, but this has not been established as fact. The visual nature of language structure has given us a different sense of what combinatorial features in both lexicon and grammar might be like, and research on both areas often reveals a vast complexity in both structure and function. A new surge of interest in the relationship between gesture and language suggests that much can be learned from examining gestural sources in both lexicalization and grammaticalization in sign language (Wilcox 2007). Such gestures are not only ‘hearing people’s’ gestures; they belong to deaf people, too, and evidence is mounting that they are integral to both lexicalization and grammaticalization patterns in sign languages.
7. Literature

Armstrong, David F./Stokoe, William C./Wilcox, Sherman E. 1995 Gesture and the Nature of Language. Cambridge: Cambridge University Press.
Armstrong, David F./Wilcox, Sherman E. 2007 The Gestural Origin of Language. Oxford: Oxford University Press.
Aronoff, Mark/Meir, Irit/Padden, Carol/Sandler, Wendy 2003 Classifier Constructions and Morphology in Two Sign Languages. In: Emmorey, Karen (ed.), Perspectives on Classifier Constructions in Sign Languages. Mahwah, NJ: Lawrence Erlbaum, 53⫺84.
Battison, Robbin 1978 Lexical Borrowing in American Sign Language. Silver Spring, MD: Linstok Press.
Brennan, Mary 1990 Word-formation in British Sign Language. Stockholm: Stockholm University Press.
Brennan, Mary 2001 Making Borrowings Work in British Sign Language. In: Brentari, Diane (ed.), Foreign Vocabulary in Sign Languages: A Cross-Linguistic Investigation of Word Formation. Mahwah, NJ: Lawrence Erlbaum, 49⫺85.
Brentari, Diane 1998 A Prosodic Model of Sign Language Phonology. Cambridge, MA: MIT Press.
Brien, David (ed.) 1992 The Dictionary of British Sign Language/English. London: Faber & Faber.
Brinton, Laurel J./Traugott, Elizabeth Closs 2005 Lexicalization and Language Change. Cambridge: Cambridge University Press.
Brouland, Joséphine 1855 Langage Mimique: Spécimen d’un Dictionaire des Signes. Washington, DC: Gallaudet University Archives.
Bybee, Joan 2001 Phonology and Language Use. Cambridge: Cambridge University Press.
Bybee, Joan 2003 Cognitive Processes in Grammaticalization. In: Tomasello, Michael (ed.), The New Psychology of Language, Volume 2: Cognitive and Functional Approaches to Language Structure. Mahwah, NJ: Lawrence Erlbaum, 145⫺167.
Bybee, Joan/Perkins, Revere/Pagliuca, William 1994 The Evolution of Grammar: Tense, Aspect, and Modality in the Languages of the World. Chicago: The University of Chicago Press.
Dudis, Paul G. 2004 Body Partitioning and Real-space Blends. In: Cognitive Linguistics 15(2), 223⫺238.
Emmorey, Karen 1999 Do Signers Gesture? In: Messing, Lynn/Campbell, Ruth (eds.), Gesture, Speech, and Sign. New York: Oxford University Press, 133⫺159.
Fauconnier, Gilles 1985 Mental Spaces. Cambridge, MA: MIT Press.
Fischer, Olga 2008 On Analogy as the Motivation for Grammaticalization. In: Studies in Language 32(2), 336⫺382.
Frishberg, Nancy 1975 Arbitrariness and Iconicity: Historical Change in American Sign Language. In: Language 51, 696⫺719.
Haiman, John 1994 Ritualization and the Development of Language. In: Pagliuca, William (ed.), Perspectives on Grammaticalization. Amsterdam: Benjamins, 3⫺28.
Haspelmath, Martin 2004 On Directionality in Language Change with Particular Reference to Grammaticalization. In: Fischer, Olga/Norde, Muriel/Perridon, Harry (eds.), Up and down the Cline ⫺ The Nature of Grammaticalization. Amsterdam: Benjamins, 17⫺44.
Heine, Bernd/Claudi, Ulrike/Hünnemeyer, Friederike 1991 Grammaticalization: A Conceptual Framework. Chicago: University of Chicago Press.
Heine, Bernd/Kuteva, Tania 2007 The Genesis of Grammar: A Reconstruction. Oxford: Oxford University Press.
Heine, Bernd/Reh, Mechthild 1984 Grammaticalization and Reanalysis in African Languages. Hamburg: Helmut Buske Verlag.
Hopper, Paul 1991 On Some Principles of Grammaticization. In: Traugott, Elizabeth Closs/Heine, Bernd (eds.), Approaches to Grammaticalization, Volume I: Focus on Theoretical and Methodological Issues. Amsterdam: Benjamins, 149⫺187.
Hopper, Paul/Traugott, Elizabeth Closs 2003 Grammaticalization (2nd Edition). Cambridge: Cambridge University Press.
Itkonen, Esa 2005 Analogy as Structure and Process. Amsterdam: Benjamins.
Janzen, Terry 1995 The Polygrammaticalization of FINISH in ASL. MA Thesis, University of Manitoba, Winnipeg.
Janzen, Terry 1998 Topicality in ASL: Information Ordering, Constituent Structure, and the Function of Topic Marking. PhD Dissertation, University of New Mexico, Albuquerque.
Janzen, Terry 1999 The Grammaticization of Topics in American Sign Language. In: Studies in Language 23(2), 271⫺306.
Janzen, Terry 2003 finish as an ASL Conjunction: Conceptualization and Syntactic Tightening. Paper Presented at the Eighth International Cognitive Linguistics Conference, July 20⫺25, 2003, Logroño, Spain.
Janzen, Terry 2007 The Expression of Grammatical Categories in Signed Languages. In: Pizzuto, Elena/Pietrandrea, Paola/Simone, Raffaele (eds.), Verbal and Signed Languages: Comparing Structures, Constructs and Methodologies. Berlin: Mouton de Gruyter, 171⫺197.
Janzen, Terry/Shaffer, Barbara 2002 Gesture as the Substrate in the Process of ASL Grammaticization. In: Meier, Richard/Cormier, Kearsy/Quinto-Pozos, David (eds.), Modality and Structure in Signed and Spoken Languages. Cambridge: Cambridge University Press, 199⫺223.
Janzen, Terry/Shaffer, Barbara/Wilcox, Sherman 1999 Signed Language Pragmatics. In: Verschueren, Jef/Östman, Jan-Ola/Blommaert, Jan/Bulcaen, Chris (eds.), Handbook of Pragmatics, Installment 1999. Amsterdam: Benjamins, 1⫺20.
Johnston, Trevor/Schembri, Adam 1999 On Defining Lexeme in a Signed Language. In: Sign Language & Linguistics 2(2), 115⫺185.
Johnston, Trevor/Schembri, Adam 2007 Australian Sign Language: An Introduction to Sign Language Linguistics. Cambridge: Cambridge University Press.
Jorio, Andrea de 2000 [1832] Gesture in Naples and Gesture in Classical Antiquity: A Translation of La mimica degli antichi investigata nel gestire napoletano, Gestural Expression of the Ancients in the Light of Neapolitan Gesturing, and with an Introduction and Notes by Adam Kendon (translated by Adam Kendon). Bloomington, IN: Indiana University Press.
Kendon, Adam 2002 Some Uses of the Headshake. In: Gesture 2(2), 147⫺182.
Klima, Edward S./Bellugi, Ursula 1979 The Signs of Language. Cambridge, MA: Harvard University Press.
Krug, Manfred G. 2001 Frequency, Iconicity, Categorization: Evidence from Emerging Modals. In: Bybee, Joan/Hopper, Paul (eds.), Frequency and the Emergence of Linguistic Structure. Amsterdam: Benjamins, 309⫺335.
Lehmann, Christian 2002 New Reflections on Grammaticalization and Lexicalization. In: Wischer, Ilse/Diewald, Gabriele (eds.), New Reflections on Grammaticalization: Proceedings from the International Symposium on Grammaticalization 1999. Amsterdam: Benjamins, 1⫺18.
Liddell, Scott K. 2003 Grammar, Gesture, and Meaning in American Sign Language. Cambridge: Cambridge University Press.
McClave, Evelyn Z. 2001 The Relationship Between Spontaneous Gestures of the Hearing and American Sign Language. In: Gesture 1, 51⫺72.
Meir, Irit 2003 Modality and Grammaticalization: The Emergence of a Case-marked Pronoun in ISL. In: Journal of Linguistics 39(1), 109⫺140.
Meir, Irit/Sandler, Wendy 2008 A Language in Space: The Story of Israeli Sign Language. New York: Lawrence Erlbaum.
Pfau, Roland 2002 Applying Morphosyntactic and Phonological Readjustment Rules in Natural Language Negation. In: Meier, Richard/Cormier, Kearsy/Quinto-Pozos, David (eds.), Modality and Structure in Signed and Spoken Languages. Cambridge: Cambridge University Press, 263⫺295.
Pfau, Roland/Quer, Josep 2002 V-to-Neg Raising and Negative Concord in Three Sign Languages. In: Rivista di Grammatica Generativa 27, 73⫺86.
Pfau, Roland/Steinbach, Markus 2006 Modality-independent and Modality-specific Aspects of Grammaticalization in Sign Languages. In: Linguistics in Potsdam 24, 3⫺98.
Pfau, Roland/Steinbach, Markus 2011 Grammaticalization in Sign Languages. In: Narrog, Heiko/Heine, Bernd (eds.), The Oxford Handbook of Grammaticalization. Oxford: Oxford University Press, 683⫺695.
Sallandre, Marie-Anne 2007 Simultaneity in French Sign Language Discourse. In: Vermeerbergen, Myriam/Leeson, Lorraine/Crasborn, Onno (eds.), Simultaneity in Signed Languages: Form and Function. Amsterdam: Benjamins, 103⫺125.
Schembri, Adam 2003 Rethinking “Classifiers” in Signed Languages. In: Emmorey, Karen (ed.), Perspectives on Classifier Constructions in Sign Languages. Mahwah, NJ: Lawrence Erlbaum, 3⫺34.
Senghas, Ann/Kita, Sotaro/Özyürek, Aslı 2004 Children Creating Core Properties of Language: Evidence from an Emerging Sign Language in Nicaragua. In: Science 305, 1779⫺1782.
Shaffer, Barbara 2000 A Syntactic, Pragmatic Analysis of the Expression of Necessity and Possibility in American Sign Language. PhD Dissertation, University of New Mexico, Albuquerque.
Shaffer, Barbara 2002 can’t: The Negation of Modal Notions in ASL. In: Sign Language Studies 3(1), 34⫺53.
Shaffer, Barbara 2004 Information Ordering and Speaker Subjectivity: Modality in ASL. In: Cognitive Linguistics 15(2), 175⫺195.
Stokoe, William C. 2001 Language in Hand: Why Sign Came Before Speech. Washington, DC: Gallaudet University Press.
Supalla, Ted 1986 The Classifier System in American Sign Language. In: Craig, Colette (ed.), Noun Classes and Categorization. Amsterdam: Benjamins, 181⫺214.
Supalla, Ted/Newport, Elissa 1978 How Many Seats in a Chair? The Derivation of Nouns and Verbs in American Sign Language. In: Siple, Patricia (ed.), Understanding Language through Sign Language Research. New York: Academic Press, 91⫺132.
Sutton-Spence, Rachel/Woll, Bencie 1999 The Linguistics of British Sign Language: An Introduction. Cambridge: Cambridge University Press.
Traugott, Elizabeth Closs 1989 On the Rise of Epistemic Meanings in English. In: Language 65(1), 31⫺55.
Traugott, Elizabeth Closs/Dasher, Richard B. 2002 Regularity in Semantic Change. Cambridge: Cambridge University Press.
Traugott, Elizabeth Closs/König, Ekkehard 1991 The Semantics-pragmatics of Grammaticalization Revisited. In: Traugott, Elizabeth Closs/Heine, Bernd (eds.), Approaches to Grammaticalization (1). Amsterdam: Benjamins, 189⫺218.
Wilcox, Phyllis 1998 give: Acts of Giving in American Sign Language. In: Newman, John (ed.), The Linguistics of Giving. Amsterdam: Benjamins, 175⫺207.
Wilcox, Sherman 2004 Cognitive Iconicity: Conceptual Spaces, Meaning, and Gesture in Signed Language. In: Cognitive Linguistics 15(2), 119⫺147.
Wilcox, Sherman 2007 Routes from Gesture to Language. In: Pizzuto, Elena/Pietrandrea, Paola/Simone, Raffaele (eds.), Verbal and Signed Languages: Comparing Structures, Constructs and Methodologies. Berlin: Mouton de Gruyter, 107⫺131.
Wilcox, Sherman/Shaffer, Barbara 2006 Modality in ASL. In: Frawley, William (ed.), The Expression of Modality. Berlin: Mouton de Gruyter, 207⫺238.
Wilcox, Sherman/Wilcox, Phyllis 1995 The Gestural Expression of Modality in ASL. In: Bybee, Joan/Fleischman, Suzanne (eds.), Modality in Grammar and Discourse. Amsterdam: Benjamins, 135⫺162.
Wischer, Ilse 2000 Grammaticalization Versus Lexicalization: ‘Methinks’ There Is Some Confusion. In: Fischer, Olga/Rosenbach, Anette/Stein, Dieter (eds.), Pathways of Change: Grammaticalization in English. Amsterdam: Benjamins, 355⫺370.
Wylie, Laurence William 1977 Beaux Gestes: A Guide to French Body Talk. Cambridge, MA: The Undergraduate Press.
Zeshan, Ulrike 2000 Sign Language in Indo-Pakistan: A Description of a Signed Language. Amsterdam: Benjamins.
Zeshan, Ulrike 2003 ‘Classificatory’ Constructions in Indo-Pakistani Sign Language: Grammaticalization and Lexicalization Processes. In: Emmorey, Karen (ed.), Perspectives on Classifier Constructions in Sign Languages. Mahwah, NJ: Lawrence Erlbaum, 113–141.
Zeshan, Ulrike 2004 Hand, Head, and Face: Negative Constructions in Sign Languages. In: Linguistic Typology 8, 1–58.
Terry Janzen, Winnipeg (Canada)
35. Language contact and borrowing
1. Introduction
2. Language contact and the bilingual situation of sign languages
3. Contact between spoken languages and sign languages
4. Contact between sign languages
5. Language attrition and death
6. Conclusion
7. Literature
Abstract
This chapter is concerned with contact between sign languages and spoken languages, contact between sign languages, and the outcomes of this contact. Earlier approaches focusing on diglossia and pidginization are reviewed, as are more recent studies of bilingualism and modality, including code-switching, code-mixing, and code-blending and their features. The consequences of sign language contact with spoken languages, including mouthing and fingerspelling, will be detailed, as will the outcomes of contact between sign languages, such as lexical borrowing and International Sign. Contact resulting in language attrition and language death will also be briefly discussed.
1. Introduction
The focus in this review will be on bilingualism and externally triggered change in sign language as a result of language contact and borrowing. Contact can occur between two sign languages or between a sign language and a spoken language, and both unimodal (two sign languages) and cross-modal (sign language/spoken language) bilingualism can be found in Deaf communities. However, because of the minority language situation
of most sign languages, contact between spoken and sign languages, and cross-modal bilingualism have been relatively well-researched, with more limited research on contact between sign languages and almost no research on sign language/sign language bilingualism. Using the contrast drawn by Hamers and Blanc (2003) between bilingualism (community-level use of more than one language) and bilinguality (an individual’s use of more than one language), it can be said that Deaf communities exhibit bilingualism, while individuals in Deaf communities exhibit variable degrees of bilinguality in a signed and spoken/written language. Of particular interest in relation to societal cross-modal bilingualism are those communities where there is widespread cross-modal bilingualism among both hearing and deaf people (see Woll/Adam (2012) for a review), but in all Deaf communities there are influences from spoken languages, resulting from code-blending as well as the more familiar code-mixing and code-switching. Borrowing can also be extensive, primarily from the dominant spoken/written language to the sign language, or, where two sign languages are in contact, between sign languages. As in all language contact situations, as minority language speakers become more fluent in the majority language, their first language loses linguistic features which are not replaced; when transmission to children is interrupted, the second generation become semi-speakers (Dorian 1982). The final section, therefore, will focus on language shift, including an exploration of language attrition in terms of the individual, and language death in relation to the community.
2. Language contact and the bilingual situation of sign languages
Deaf communities form minority language communities within dominant spoken language communities. The effects of language contact in such settings can be seen across a range of linguistic phenomena, including borrowings and loans, interference, convergence, transference, bilingualism, code-switching, foreigner talk, language shift, language attrition, language decline, and language death (Thomason 2001). The effects of contact between sign languages and spoken languages parallel those of contact between spoken languages in similar sociolinguistic contexts. Additionally, sign languages can be in contact with other sign languages, and the same power asymmetries can often be seen in the outcomes of contact. Language contact can result in bilingualism (Grosjean 1982); beyond bilingualism, contact between languages can also result in phonological, lexical, and grammatical change in either or both languages. As Sankoff (2001) notes, languages used by bilinguals may undergo additional changes that are different from those found in monolingual communities, as additional factors may drive change. With respect to sign languages, Lucas and Valli (1989, 1991, 1992) report that the major outcomes of language contact, such as lexical influence from one language on the other, foreigner talk, interference (Weinreich 1968), and the creation of pidgins, creoles, and mixed systems, are also found in signed-spoken language contact. Johnston and Schembri (2007) adapt Lucas and Valli’s (1992) model of the different varieties of signing to describe the differences between contact and artificial varieties of signing in relation to the situation of Australian Sign Language (Auslan). In contact signing between Auslan and English,
a simplified English word order, reduced use of space and non-manual features, as well as some idiosyncratic patterns are found, whereas in artificially created varieties designed to represent English, such as Australasian Signed English, the syntax follows that of English. In some situations, contact results in the creation of pidgins and creoles. Cross-modal societal bilingualism has been reported in many communities in which Deaf people live. Different types of language contact and social structure in communities such as those of Martha’s Vineyard, Bali, and Yucatan are described and contrasted by Woll and Ladd (2003). In most of these communities, there is a high incidence of deafness and a high proportion of hearing people are fluent in both a spoken language and a sign language (see chapter 24, Shared Sign Languages, for further discussion).
3. Contact between spoken languages and sign languages
Sign language researchers in the 1970s and 1980s, noting how people’s signing changed in different contexts, drew on the sociolinguistic literature to explain this phenomenon. Having observed influence from English on American Sign Language (ASL) to varying degrees, Stokoe (1969) proposed that this be characterised as a form of diglossia. Classically, diglossia refers to communities where there are High and Low varieties of a single language, used in different settings, for example in Switzerland, where Swiss German (Low) and Standard German (High) are both in use. In such communities, the Low variety is used for everyday communication, while the High variety is used in literature and formal education (Ferguson 1959). Fishman (1967) extended this to address the relationship between diglossia and bilingualism. Woodward (1973) described a ‘deaf diglossic continuum’ to reflect the variable mix of ASL and English found in the American Deaf community, using the term ‘Pidgin Signed English’ to refer to the variety found in the middle of the continuum. Deuchar (1984) applied Woodward’s deaf diglossic continuum to the British Deaf community but contended that it was an oversimplification of the language contact phenomena. Contemporary with Woodward and Stokoe’s work, Tervoort (1973) argued that under the classic definition of diglossia, the High and Low forms had to be varieties of the same spoken language. Since ASL and English were two different languages in contact, it would be more appropriate to describe the Deaf community as a bilingual community. Therefore, if diglossia existed, it was not between ASL and English, but rather between ASL and manual varieties of English, sometimes called Manually Coded English (MCE), which seek to represent the grammar of English in manual form. The modality differences between signed and spoken languages also render the diglossia model problematic in this contact situation. Cokely (1983) moved on from the diglossia model and described how interaction between fluent Deaf signers and hearing learners of sign language results in ‘foreigner talk’. Lucas and Valli (1992) proposed the term ‘contact signing’, and this is now generally used to refer to mixes
between a signed and spoken language. The prevailing view nowadays is that the Deaf community is a bilingual community with individual Deaf people having varying degrees of fluency in the signed and spoken languages of the community.
3.1. Pidgins
A pidgin is a simplified language which arises from contact between two languages and which is not a stable variety of language. A creole is formed when a pidgin is nativized, that is, acquired by children as a first language. Creoles often have a grammar different from that of the languages they are derived from, as well as some evidence of phonological and semantic shift (Hall 1966). Fischer (1978) pointed out a number of linguistic and socioeconomic similarities between pidgin forms resulting from contact between sign language and spoken language and pidgins and creoles resulting from contact between spoken languages (see chapter 36 for further discussion of creolisation). Woodward (1973, 1996) proposed the concept of a Pidgin Signed English which included grammatical structures reduced and mixed from ASL and English, along with new structures which did not originate from either ASL or English. Because pidgins are the result of language contact and creoles are learnt as a first language, the age of acquisition and the context of language use can influence whether a Deaf person uses a pidgin form of sign language or a sign language (Mayberry/Fischer/Hatfield 1983). However, there are significant differences between the contexts in which spoken language pidgins arise and those described for sign language-spoken language contact: for example, the people who mix signed and spoken languages regularly tend to be fluent users of both a signed and a spoken language (Johnston/Schembri 2007). Varieties arising spontaneously are now referred to as contact signing (Lucas/Valli 1992), while terms such as Pidgin Signed English and Manually Coded English (Bornstein 1990; Schick 2003) are used for manual representations of spoken English which often use additional signs created to represent English function words.
3.2. Code-switching and code-mixing
Of all the possible forms of interference between two languages, code-switching and code-mixing are the most studied (Thomason 2001); both refer to the use of material (including vocabulary and grammar) from more than one language within a conversation. With respect to contact between sign language and spoken language, code-mixing and code-switching are seen as context- and content-dependent (Ann 2001; Kuntze 2000; Lucas/Valli 1992). Code-switching occurs inter-sententially (switching at a sentence boundary), while code-mixing occurs intra-sententially. However, Ann (2001) points out that code-switching and code-mixing in sign language-spoken language contact would require a person to stop signing and start speaking, or vice versa. This hardly ever occurs in communication between individuals who are bilingual in both a spoken and a sign language (Emmorey et al. 2008).
3.3. Code-blending
Myers-Scotton (1993) proposed the matrix language-frame model which takes into account the languages that play a part in code-switching and code-mixing; the most dominant language in the sentence is the matrix language (ML) while the other language is called the embedded language (EL). Romaine (1995) describes how in intense language contact a third language system may emerge which shows properties not found in either of the input languages. In relation to sign languages, Lucas and Valli (1992) discuss the existence of a third system, which is neither ASL nor English and in which phonological, morphological, syntactic, lexical, and pragmatic features are produced simultaneously. In this system, stretches of discourse cannot be assigned either to ASL or to English, as they combine elements of both languages and also include some idiosyncratic characteristics. This is known as code-blending. Code-blending in sign language-spoken language contact has unique properties because of the different language modalities involved. Because the articulators for spoken languages and sign languages are different, it is possible to use both types of articulators at the same time. This is not only found in contact between spoken language-dominant and sign language-dominant signers, but also between native signers who are also fluent in a spoken language. Emmorey, Borinstein, and Thompson (2005) discuss the presence of ‘code-blending’ in bimodal bilingual interactions. Van den Bogaerde (2000) also found this phenomenon in interactions between deaf adults and hearing children. Emmorey et al. (2008) report that full switches between languages in ASL-English bilinguals are exceptional because the different modalities allow for the simultaneous production of elements of both languages. In a study designed to elicit language mixing from hearing native signers, the predominant form of mixing was code-blends (English words and ASL signs produced at the same time). They also found that where ASL was the matrix language, no single-word code-blends were produced. Baker and van den Bogaerde (2008), in an investigation of language choice in Dutch families with Deaf parents and deaf or hearing children, found that code-blending varies, depending on which is the matrix (or base) language. Both the Emmorey et al. study and Baker and van den Bogaerde’s research contrast with Lucas and Valli’s (1992) claim that code-blending is a third system and that there is no matrix language. The examples in (1) to (4) illustrate the various types of code-blending occurring with different matrix languages. In (1), Dutch is the matrix language: the utterance is articulated fully in Dutch, but the verb vallen (‘fall’) is accompanied by the corresponding sign from Sign Language of the Netherlands (NGT). Example (2) shows the reverse pattern; the utterance is expressed in NGT, but the final sign blauw (‘blue’) is accompanied by the corresponding Dutch word (Baker/van den Bogaerde 2008, 7 f.).
(1) Dutch matrix language [Dutch/NGT]
    Signed:                vallen
    Spoken: die gaat       vallen
            that goes      fall
    ‘That [doll] is going to fall.’
(2) NGT matrix language
    Signed: index jas blauw
    Spoken:           blauw
            coat      blue
    ‘He has a blue coat.’
Example (3) is different from (1) and (2) in that both spoken and signed elements contribute to the meaning of the utterance; note the Dutch verb doodmaken (‘kill’) accompanying the sign schieten (‘shoot’). Thus the sign specifies the meaning of the verb (Baker/van den Bogaerde 2008, 9). It is also noteworthy that in the spoken utterance, the verb does not occupy the position it would usually occupy in Dutch (the Dutch string would be (De) politie maakt andere mensen dood); rather, it appears sentence-finally, as is common in NGT. Baker and van den Bogaerde (2008) refer to this type as ‘mixed’ because there is no clearly identifiable matrix language. The same is true for (4), but in this example, a full blend, all sentence elements are signed and spoken (van den Bogaerde/Baker 2002, 191).
(3) Mixed (no matrix language)
    Signed: politie ander mensen schieten
    Spoken: politie andere mensen doodmaken
            police other people shoot/kill
    ‘The police shot the other people.’

(4) Full blending of Dutch and NGT [Dutch/NGT]
    Signed: boek pakken
    Spoken: boek pakken
            book fetch
    ‘I will fetch the book.’
Code-blends were found both in mothers’ input to their children and in the children’s output. NGT predominated as the matrix language when Deaf mothers communicated with their deaf children, while spoken Dutch was more often used as the matrix language with hearing children. The hearing children used all four types of code-blending, whereas the deaf children tended to use NGT as a matrix language (van den Bogaerde/Baker 2005). Code-blending took place more often with nouns than with verbs. Bishop and Hicks (2008) investigated bimodal bilingualism in hearing native signers, describing features in their English that are characteristic of sign languages but are not otherwise present in English. These hearing native signers also combined features of both ASL and English, illustrating their fluent bilingualism and shared cultural and linguistic background. As mentioned before, Emmorey et al. (2008) found that adult bimodal bilinguals produced code-blending much more frequently than code-switching. Where code-blending occurred, semantically equivalent information was provided in the two languages. They argue that this challenges current psycholinguistic models of bilingualism, because it shows that the language production system is not restricted to a single lexical representation at the word level. Although there are independent articulators (hands for ASL and mouth for English), two different messages are not produced simultaneously – in line with Levelt’s
(1989, 19) constraints which prevent the production or interpretation of two concurrent propositions. However, there are disagreements about whether mouthing (unvoiced articulation of a spoken word, with or without a manual sign; see section 3.4.2 below) is a case of code-blending or whether code-blending only occurs when both English and ASL become highly active (see Vinson et al. 2010). There are many linguistic and social factors that trigger code-blending, with code-blending having the same social and discourse function for bimodal bilinguals that code-switching has for unimodal bilinguals (Emmorey et al. 2008). Triggers previously identified for code-switching include discourse and social functions, such as identity, linguistic proficiency, signaling topic changes, and creating emphasis (Romaine 1995). Nouns generally switch more easily than verbs; however, Emmorey et al. (2008) found that in single-sign code-blends or code-mixes, ASL verbs were more likely to be produced. They explain this by noting that it is possible to articulate an ASL verb and to produce the corresponding English verb with tense inflection at the same time.
3.4. Borrowing from spoken language to sign language
Thomason and Kaufman (1988) define ‘borrowing’ as the incorporation of foreign features into a group’s native language. Lexical borrowing generally occurs when speakers in contact with another, more dominant language perceive a gap or a need for reference to new or foreign concepts in their first language; the outcome is to expand the lexicon, or to create substitutes for existing words. Battison (1978) is the first major study of lexical borrowing into ASL from English. He describes how fingerspelled words are restructured and borrowed and argues that this restructuring and borrowing is no different from that which occurs between spoken languages. McKee et al. (2007) describe how ‘semantic importation’ of spoken lexical items into sign languages has specific features arising from the modality difference: borrowing generally occurs through mechanisms such as fingerspelling, mouthing, initialized sign formations, and loan translation. Foreign forms that combine structural elements from two languages may be described as hybrids: ‘Māoridom’, which refers to the Māori people, their language, and culture, is an example in New Zealand English, while initialized signs and the co-articulation of a manual sign with a mouthing specifying the meaning of the sign are forms of hybrid loans commonly found in sign languages, including New Zealand Sign Language (NZSL). Two social preconditions for borrowing between languages are extended social contact and a degree of bilinguality in speakers (Thomason/Kaufman 1988). In language contact settings, bilingual individuals are instrumental in introducing new usages and coinages from a second language to the community, which are then transmitted to monolingual speakers who would not otherwise have access to them. As for the New Zealand situation, an important factor in contact between Te Reo Māori and NZSL is the emergence of bilingual individuals and of domains where the two languages are in use by Deaf and hearing participants. Māori sign language interpreters, and other hearing Māori with NZSL skills, have in some instances been key agents of motivating, coining, and disseminating contact forms (McKee et al. 2007). Exposure to a second language, resulting in indirect experience of that language, rather than actual bilingualism,
can be sufficient to prompt lexical borrowing. This describes the circumstances of Māori Deaf themselves, who have created contact sign forms as a result of indirect exposure to Te Reo Māori, rather than through direct use of it as bilinguals.
3.4.1. Fingerspelling
Fingerspelling is the use of a set of manual symbols which represent letters in a written language (Sutton-Spence 1998). There are many different manual alphabets in use around the world, some of which are two-handed (e.g. the system used in the UK) and others which are one-handed (e.g. the systems used in the US and the Netherlands) (Carmel 1982). Fingerspelling is treated differently by different researchers; some consider it as part of the sign language, while others see it as a foreign element coming from outside the core lexicon. Battison’s (1978) study of loan forms from fingerspelling was based on the premise that fingerspelled events were English events. Other researchers, such as Davis (1989), have argued that fingerspelling is not English. Davis goes on to argue that fingerspelling is an ASL phonological event because ASL morphemes are never borrowed from the orthographic English event; they are simply used to represent the orthographic event. Loans from fingerspelling are restructured (Lucas/Valli 1992, 41) to fit the phonology of the sign language. Sutton-Spence (1994) discusses fingerspellings and single manual letter signs (SMLS) as loans from English, whatever their form or degree of integration into British Sign Language (BSL). The articulatory characteristics of the fingerspelled word, the phonological and orthographic characteristics of the spoken and written word, and the phonological characteristics of the sign language all influence how words are borrowed and in what form. Quinto-Pozos (2007) views fingerspelling as one of the points of contact between a signed and a spoken language, with fingerspelling available as a way of code-mixing. Waters et al. (2007, 1287) investigated the cortical organization of written words, pictures, signs, and fingerspelling, and whether fingerspelling was processed like signing or like writing. They found that fingerspelling was processed in areas in the brain similar to those used for sign language, and distinct from the neural correlates involved in processing written text. Although the written form of spoken language can be a source for borrowing of vocabulary through fingerspelling, Padden and LeMaster (1985), Akamatsu (1985), and Blumenthal-Kelly (1995) have all found that children recognize fingerspelled words in context long before the acquisition of fingerspelling, and so those fingerspelled words are considered signs. Additionally, lexical items can be created, according to Brentari and Padden (2001) and Sutton-Spence (1994), through the compounding of fingerspelling and signs, for example, fingerspelled -p- + mouth for Portsmouth. The American manual alphabet is one-handed; the British manual alphabet is two-handed. In both sign languages, fingerspelling can be used to create loan signs. However, there is an influence of the use of two hands for fingerspelling on loan formation. In a corpus of 19,450 fingerspelled BSL items, Sutton-Spence (1998) found that very few were verbs and most were nouns. There are various possible reasons, including the influence of word class size on borrowing frequency: nouns make up 60 % and verbs make up 14 % of the vocabulary. However, she also suggests that the difference might be due to phonotactic reasons. In order to add inflection, fingerspelled loan verbs
would have to move through space while simultaneously changing handshapes; this, however, would violate phonotactic rules of BSL relating to the movement of two hands in contact with each other. There is a process of nativization of fingerspelling (Kyle/Woll 1985; Sutton-Spence 1994; Cormier/Tyrone/Schembri 2008), whereby a fingerspelled event becomes a sign. This occurs when (i) forms adhere to phonological constraints of the native lexicon, (ii) parameters of the forms occur in the native lexicon, (iii) native elements are added, (iv) non-native elements are reduced (e.g. letters lost), and (v) native elements are integrated with non-native elements (Cormier/Tyrone/Schembri 2008). Brennan, Colville, and Lawson (1984) discuss the borrowing of Irish Sign Language (Irish SL) fingerspelling into BSL by Catholic signers in the west of Scotland. Johnston and Schembri (2007) also mention signs with initialization from the Irish manual alphabet, borrowed into Auslan, although this is no longer a productive process. Initialization is widely seen in sign languages with a one-handed manual alphabet and refers to a process by which a sign’s handshape is replaced by a handshape associated with (the first letter of) a written word. For example, in ASL, signs such as group, class, family, etc., all involve the same circular movement executed by both hands in neutral space, but the handshapes differ and are the corresponding handshapes from the manual alphabet: -g-, -c-, and -f-, respectively. Machabée (1995) noted the presence of initialized signs in Quebec Sign Language (LSQ), which she categorized into two groups: those realized in fingerspelling space or neutral space, accompanied by no movement or only a hand-internal movement, and those which are realized as natural LSQ signs, created on the basis of another existing but non-initialized sign, through a morphological process. Initialized signs are rare in sign languages using a two-handed alphabet; instead SMLS are found. In contrast to initialized signs, SMLS are not based on existing signs; rather, they only consist of the hand configuration representing the first letter of the corresponding English word, to which a movement may be added (Sutton-Spence 1994). Loans from ideographic characters are reported, for example, in Taiwanese Sign Language (TSL) (Ann 2001, 52). These are either signed in the air or on the signer’s palm. Interestingly, these loans sometimes include phonotactic violations, and handshapes which do not exist in TSL appear in some character loan signs. Parallels may be seen in the ‘aerial fingerspelling’ used by some signers in New Zealand. With aerial fingerspelling, signers trace written letters in the air with their index finger, although this is only used by older people and does not appear in the data which formed the basis of the NZSL dictionary (Dugdale et al. 2003, 494).
3.4.2. Mouthing In the literature, two types of mouth actions co-occurring with manual signs are usually distinguished: (silent) mouthings of spoken language words and mouth gestures, which are unrelated to spoken languages (Boyes-Braem/Sutton-Spence 2001). Mouthing plays a significant role in contact signing (Lucas/Valli 1989; Schermer 1990). There is, however, disagreement about the role of mouthing in sign languages: whether it is a part of sign language or whether it is coincidental to sign language and reflects bilingualism (Boyes-Braem/Sutton-Spence 2001; Vinson et al. 2010).
849
Schermer (1990) is the earliest study of mouthing, investigating features of the relationship between NGT and spoken Dutch. Her findings indicate that the mouthing of words (called ‘spoken components’ in her study) has two roles: to disambiguate minimal pairs and to specify the meaning of a sign. She found differences between signers, with age of acquisition of a sign language having a strong influence on the amount of mouthing. Schermer described three types of spoken components: (i) complete Dutch lexical items unaccompanied by a manual sign; these are mostly Dutch prepositions, function words, and adverbs, (ii) reduced Dutch lexical items that cannot be identified without the accompanying manual sign, and (iii) complete Dutch lexical items accompanying a sign, which have the dual role of disambiguating and specifying the meaning of signs. She also mentions a fourth group which is both semantically and syntactically redundant. Example (5a) illustrates type (ii); here the mouthing is reduplicated (koko) in order to be synchronized with the repeated movement of the sign koken (‘to cook’). A mouthing of type (iii) is shown in (5b). This example is interesting because the sign koningin (‘queen’) has a double movement and is accompanied by the corresponding Dutch word koningin, which, however, is not articulated in the same way as it would usually be in spoken Dutch; there are three syllables in the Dutch word and the second syllable is less stressed so that the last syllable coincides with the second movement of the sign (Schermer 2001, 276).
(5) a. /koko/          [NGT]
       koken
       ‘to cook’
    b. /koningin/
       koningin
       ‘queen’
Sutton-Spence and Woll (1999, 83) and Johnston and Schembri (2007, 185) also refer to mouthing as providing a means of disambiguating between SMLS – in Auslan, the signs geography and garage, for example, can be disambiguated by mouthing. In another study, Schembri et al. (2002) found that more noun signs had mouthed components than verb signs, and this was also reported for German Sign Language (DGS, Ebbinghaus/Hessmann 2001), Swiss-German Sign Language (SGSL, Boyes-Braem 2001), and Sign Language of the Netherlands (NGT, Schermer 2001). Moreover, Hohenberger and Happ (2001) report differences between signers: some signers used ‘full mouthings’, where strings of signs are accompanied by mouthings, while others used ‘restricted mouthings’, where mouth gestures predominate and signs are only selectively accompanied by mouthings. Bergman and Wallin (2001) suggest that mouthings follow a hierarchical structure similar to other components of spoken and sign languages.
3.4.3. Loan translations and calques
Sign languages borrow extensively from spoken languages (Johnston/Schembri 2007), creating calques such as support+group and sports+car. In some cases, a loan translation in BSL such as break+down exists alongside a native sign breakdown. Brentari and Padden (2001) discuss ASL examples such as dead+line and time+line. Calques
can include semantically incongruous but widely used forms such as baby+sit. Loan translations can also include compounds composed of a native sign and a fingerspelled form, such as dead+-e-n-d-.
3.5. Borrowing from the gestures of hearing communities
Gesture is universally used within hearing communities. Co-speech gestures include deictic gestures (pointing), referential gestures (iconically motivated gestures), and emblems (gestures highly conventionalised within a community (Kendon 2004); see chapter 27 for further discussion), and all of these often find their way into sign languages, becoming linguistic elements in the process (see chapter 34, Lexicalisation and Grammaticalisation). Elements borrowed into sign languages include both manual and non-manual gestures. One major group consists of manual emblems, that is, conventional, culture-specific gestures. Manual emblems can be lexicalized (e.g. good (thumb up) in BSL and many other sign languages; hungry in Italian Sign Language (LIS); yummy in NGT). Deictic manual gestures such as points can be grammaticalized (for example, in pronominal forms); and non-manual gestures may become markers with a linguistic function. Pyers and Emmorey (2008), for instance, found that non-linguistic facial expressions such as brow movement, commonly used by hearing people in questions and “if-then” sentences, appear with linguistic function in sign languages. Antzakas (2006) suggests that the backwards head tilt used by hearing communities in the eastern Mediterranean area as a gesture meaning ‘No’ – contrasting with the headshake gesture used in Northern Europe (Antzakas/Woll 2002) – has been borrowed and grammaticalized as a negation marker in Greek Sign Language (GSL). Cultural factors in the relationship between the gestures of the hearing community and the signs of the Deaf community were explored by Boyes-Braem, Pizzuto, and Volterra (2002). They found that Italian non-signers were better than signers and non-signers from other countries at guessing the meanings of signs rooted in Italian culture, that is, manual forms which also occurred as referential gestures and emblems in Italian co-speech gesture (also see Pizzuto/Volterra 2000).
4. Contact between sign languages
Contact between two sign languages results in similar phenomena to those that occur when two spoken languages come into contact, particularly with respect to interference and code-switching (Lucas/Valli 1989, 1992; Quinto-Pozos 2008). Many of the processes discussed above relating to the outcomes of contact between a sign language and a spoken language are also found in sign language to sign language contact. However, there are also some interesting differences. Code-switching (Quinto-Pozos 2008, 2009) between a sign language and a spoken language raises the issue of modality differences (see section 3.3), as opposed to code-switching between two sign languages. To date, only a few studies have focussed on contact between two sign languages. This may be due to the fact that in order to investigate contact between two sign languages, a detailed description of each of the sign languages is necessary, that is,
a description of their individual phonetic, phonological, morphological, and syntactic structures as well as the extent to which these differ between the two languages. However, a few studies of borrowing between two sign languages exist. Meir and Sandler (2008), for instance, note how signs in Israeli Sign Language (Israeli SL) are borrowed from other sign languages, brought by immigrants (e.g. from Germany and Russia). Using Muysken’s (2000) typology, Adam (2012) examined the contact between dialects of BSL (including Auslan) and dialects of Irish SL (including Australian Irish Sign Language) and found examples of all three types of code-mixing in this typology, although congruent lexicalisation was the most common form of code-mixing. Valli and Lucas (2000) discuss how contact between two sign languages can result not only in lexical borrowing, but also in code-switching, foreigner talk, and interference, as well as in pidgins, creoles, and mixed systems.
4.1. International Sign
Deaf people in the Western and Middle Eastern world have gathered together using sign language for at least 2,000 years (Woll/Ladd 2003). The international Deaf community is highly mobile, and in the 21st century there are regular international events, including the World Federation of the Deaf Congresses, the Deaflympics, and other international and regional gatherings. Cross-national signed communication was first reported in the early 19th century (Laffon de Ladébat 1815; Murray 2009). Laffon de Ladébat describes the meeting of Laurent Clerc with the deaf children at the Braidwood school in London:

As soon as Clerc beheld this sight [the children at dinner] his face became animated; he was as agitated as a traveller of sensibility would be on meeting all of a sudden in distant regions, a colony of his own countrymen. […] Clerc approached them. He made signs and they answered him by signs. This unexpected communication caused a most delicious sensation in them and for us was a scene of expression and sensibility that gave us the most heartfelt satisfaction. (Laffon de Ladébat 1815, 33)
This type of contact was not uncommon within Europe. The Paris banquets for deaf-mutes (sic) in the 19th century are another example of the coming together of Deaf people in a transnational context:

There were always foreign deaf-mutes in attendance, right from the first banquet. At the third, there were deaf-mutes from Italy, England, and Germany. […] It seems that many of these foreign visitors […] were painters drawn to Paris to learn or to perfect their art, and even to stay on as residents. Several decades later, deaf American artists […] and the painter J. A. Terry (father of the Argentinean deaf movement) probably all participated in the banquets. (Mottez 1993, 32)

Deaf-mute foreigners, in their toasts, never missed a chance to emphasize the universal nature of signs, claiming that “it easily wins out over all the separate limiting languages of speaking humanity, packed into a more or less limited territory. Our language encompasses all nations, the entire globe.” (Mottez 1993, 36)
Such cross-linguistic communication can be regarded as a pidgin. In a sign pidgin, Deaf people from different communities communicate by exploiting their awareness
of iconicity and their access to visual-spatial expression. Such pidgins, however, cannot easily be used to convey complex meanings, especially to Deaf people who have had little exposure to or practice with cross-linguistic communication. The description of Clerc at the deaf school suggests a situational pidgin created between a Deaf adult using French Sign Language (LSF) and BSL-using Deaf children. In the case of the Paris banquets, it is not known whether a situational pidgin was used or whether, due to the length of stay of the banqueters, LSF was the language of interaction. Most of what is known about pidgins is based on language contact with spoken languages (Supalla/Webb 1995), and there has been relatively little research on the linguistic outcome of contact between sign languages. However, there has been some research on International Sign (IS), a contact variety which results from contact between sign languages. Use of the term International Sign, rather than International Sign Language, emphasises that IS is not recognised as having full linguistic status. Although used for communication across language boundaries, it is not comparable to Esperanto in that it is not a planned language with a fixed lexicon and a fixed set of grammatical rules. In the 1970s, Gestuno: International Sign Language of the Deaf was an attempt by the World Federation of the Deaf to create a standardised artificial international sign language, but this attempt was not successful (Murray 2009). IS is a pidgin with no native signers or extended continuous usage (Moody 1994; Supalla/Webb 1995). However, the structure of IS is much more complex than that usually found in pidgins. In their study on the grammar of IS, Supalla and Webb report finding SVO word order, five types of negation, and verb agreement, all used with consistency and structural regularity (Supalla/Webb 1995, 348). This complexity is most likely the result of the similarity of the grammatical and morphological structures of the sign languages in contact – to the extent that IS has been considered a koine or universal dialect. However, as Supalla and Webb also point out, studies of IS have largely been concerned with contact among European sign languages (including ASL, which is of European origin) and this may provide a misleading picture. Unlike sign languages, IS does not have its own lexicon (Allsop/Woll/Brauti 1995). Signers therefore have to decide whether to use signs from their own language, or from another sign language, or whether to use mime, gesture, referents in the environment, or one of the few signs recognised as conventional in IS. Consequently, signers of IS often chain together strings of signs and gestures to represent a single referent. Thus signers of IS combine a relatively rich and structured grammar with a severely impoverished lexicon (Allsop/Woll/Brauti 1995). This pattern is very different from that found in spoken language pidgins, where the grammar is relatively more impoverished than the lexicon. Allsop, Woll, and Brauti also found that IS texts were longer in duration and slower in production. This has implications for those seeking to provide interpretation in IS at international meetings (McKee/Napier 2002; for issues in sign language interpreting, also see chapter 36).
In fact, IS shares many features with foreigner talk: it incorporates the same types of language modification that native speakers use when interacting with non-native speakers, such as slower rate of production, louder speech (or, in the case of sign languages, larger signs), longer pauses, common vocabulary, few idioms, greater use of gesture, more repetition, more summaries of preceding utterances, shorter utterances, and more deliberate articulation (Alatis 1990, 195). The increasing mobility of deaf people within some transnational regions (e.g. Europe) has resulted in greater opportunities for contact with Deaf people from other
countries within those regions, greater knowledge of the lexicons of other sign languages, and more frequent use of IS strategies. The effectiveness of IS is undoubtedly enhanced by the historical relationships that many European sign languages have with each other. It is unknown how effective IS is for signers from Asia and Africa or for users of village sign languages. IS is, however, an effective mode of communication for many Deaf people in transnational contexts and has been used as a ‘lingua franca’ at international events such as the Deaflympics since their beginning with the first ‘Silent Games’ in 1924, in which nine European countries took part. IS is also used by the World Federation of the Deaf (WFD), a global lobbying organisation of Deaf communities, where interpretation into IS has been provided since 1977 (Scott-Gibson/Ojala 1994). When two Deaf individuals meet, with similar experiences of interacting gesturally with non-signers and with experience of using a language in the visual modality, a situational pidgin can be created effectively. The more experience signers have in communicating with users of other sign languages, the greater their exposure to different visually-motivated lexicons will be. This in turn will result in an increased number of strategies and resources to create a situational pidgin. Strings of actions and descriptions are presented from an experiential perspective for interlocutors to understand context-specific meanings. This communication also heavily relies on the inferential processes of the receiver to understand semantic narrowing or broadening.
4.2. Education and colonisation
The travels of Deaf people are not the only form of transnational contact within the Deaf community. The history of deaf education and of the training of teachers of the deaf is often linked with sign language contact. As McCagg (1993) notes, teachers of the deaf for the Habsburg empire were trained in Germany. In Ireland, the education system and Irish SL were originally influenced by BSL and later by LSF, when Deaf nuns came from France to establish a school for the deaf in Dublin (Burns 1998; Woll/Elton/Sutton-Spence 2001). All three of these languages – BSL, Irish SL, and LSF – have influenced or been the progenitors of other sign languages. These influences have spread around the world from Europe to the Americas and to the Antipodes. The colonial influence on sign languages via educational establishments has in all likelihood influenced IS. European sign languages were brought to many countries across the globe. LSF, for instance, has had a profound influence on many sign languages, including ASL (Lane 1984) and Russian Sign Language (Mathur/Rathmann 1998), and its footprint spreads across central Asia and Transcaucasia in the area of the old Soviet empire (Ojala-Signell/Komarova 2006). Other colonial powers in Europe influenced the education systems of the Americas (such as the influences of LIS and Spanish Sign Language (LSE) on the sign language in Argentina), and DGS has had an influence on Israeli SL as a result of post-war immigration (Namir et al. 1979). Moreover, Irish SL and ASL have been brought to many countries in the southern hemisphere through education and religious missionary work (e.g. the use of ASL in deaf education in Ghana). As well as lexical influences, European sign languages may also influence the types of linguistic structures that we see in IS, including the metaphoric use of space (for example, timelines).
5. Language attrition and death
All languages may undergo attrition and death. For many sign languages, death has been and continues to be likely, given the status of sign languages around the world, the history of oppression of Deaf communities, and technological advances (including cochlear implants and genetic screening; Arnos 2002). Brenzinger and Dimmendaal (1992) note that language death is always accompanied by language shift, which occurs when a language community stops using one language and shifts to using another language, although language shift does not always result in language death. Language death is influenced by two aspects: (i) the environment, consisting of political, historical, economic, and linguistic realities; (ii) the community, with its patterns of language use, attitudes, and strategies. Brenzinger and Dimmendaal (1992) observe that every case of language death is embedded in a bilingual situation, which involves two languages, one of which is dying and one of which continues. Sign languages are always under threat from the dominant spoken language community, particularly in relation to education and intervention for deaf children, contexts in which, over a long period of time, sign language has not been seen to have a place. Besides direct pressures to abandon bilingualism in a sign language and spoken language, in some countries communities have shifted from using one sign language to another. For example, ASL has replaced indigenous sign languages in some African, Asian, and Caribbean countries (Schmaling 2001). There is a limited literature on sign language attrition. Yoel (2007) identified a set of linguistic changes in a study of Russian Sign Language (RSL) users who had immigrated to Israel. She found that all parameters of a sign underwent phonological interference. Errors made by the signers she studied were mainly miscues and temporary production errors, which are explained as language interference between RSL and Israeli SL. These changes can be seen as precursors of language attrition. In a study of Maritime Sign Language in Canada, a sign language historically descended from BSL, Yoel (2009) found that as a result of language contact with ASL, a shift of language use had taken place, and that Maritime Sign Language is now moribund, with only a few, elderly users.
6. Conclusion
The political and historical aspects of language use and their influence cannot be separated from studies of languages in contact. In contact with spoken languages, the favouring of forms of communication other than sign language and the view that sign language is not appropriate for some situations are the direct results of a sociolinguistic situation in which sign languages have been ignored and devalued, and in which the focus has traditionally been on the instruction and use of spoken languages. It is only if sign languages become more highly valued, formally and fully recognised, and used in a
wide range of contexts of communication, that the outcomes of language contact in the Deaf community will change. The impact of contact with another language on a sign language also needs to be addressed in terms of modality: cross-modal contact involving contact between a sign language and a spoken language versus unimodal contact between two sign languages. There is a larger body of research into the first type of contact, with new understandings beginning to emerge. From earlier explorations of diglossia and pidginization, researchers have moved towards the study of bimodal language contact and code-blending, as well as other features of cross-modal language contact. With respect to contact between two sign languages, further research is needed to fully understand whether it parallels contact between two spoken languages, exhibiting features such as code-switching, borrowing, language transfer, and interference. This new area of research will contribute to both sociolinguistic theory and language processing research.
7. Literature
Adam, Robert 2012 Unimodal Bilingualism in the Deaf Community: Contact Between Dialects of BSL and ISL in Australia and the United Kingdom. PhD Dissertation, University College London.
Akamatsu, C. Tane 1985 Fingerspelling Formulae: A Word is More or Less than the Sum of Its Letters. In: Stokoe, William/Volterra, Virginia (eds.), Sign Language Research ’83. Silver Spring, MD: Linstok Press, 126–132.
Alatis, James E. 1990 Linguistics, Language Teaching, and Language Acquisition: The Interdependence. In: Georgetown University Round Table on Language and Linguistics (GURT) 1990. Washington, DC: Georgetown University Press.
Allsop, Lorna/Woll, Bencie/Brauti, Jon-Martin 1995 International Sign: The Creation of an International Deaf Community and Sign Language. In: Bos, Heleen/Schermer, Trude (eds.), Sign Language Research 1994. Hamburg: Signum, 171–188.
Ann, Jean 2001 Bilingualism and Language Contact. In: Lucas, Ceil (ed.), The Sociolinguistics of Sign Languages. New York: Cambridge University Press, 33–60.
Antzakas, Klimis 2006 The Use of Negative Head Movements in Greek Sign Language. In: Zeshan, Ulrike (ed.), Interrogative and Negative Constructions in Sign Languages. Nijmegen: Ishara Press, 258–269.
Antzakas, Klimis/Woll, Bencie 2002 Head Movements and Negation in Greek Sign Language. In: Wachsmuth, Ipke/Sowa, Timo (eds.), Gesture and Sign Language in Human-computer Interaction. Berlin: Springer, 193–196.
Arnos, Kathleen S. 2002 Genetics and Deafness: Impacts on the Deaf Community. In: Sign Language Studies 2(2), 150–168.
35. Language contact and borrowing Baker, Anne/Bogaerde, Beppie van den 2008 Code-mixing in Signs and Words in Input to and Output from Children. In: Plaza-Pust, Carolina/Morales-López, Esperanza (eds.), Sign Bilingualism. Amsterdam: Benjamins, 1⫺27. Battison, Robin 1978 Lexical Borrowing in American Sign Language. Silver Spring, MD: Linstok Press. Bergman, Brita/Wallin, Lars 2001 A Preliminary Analysis of Visual Mouth Segments in Swedish Sign Language. In: Boyes-Braem, Penny/Sutton-Spence, Rachel (eds.), The Hands are the Head of the Mouth: The Mouth as Articulator in Sign Languages. Hamburg: Signum, 51⫺68. Bishop, Michele/Hicks, Sherry 2008 Coda Talk: Bimodal Discourse Among Hearing, Native Signers. In: Bishop, Michele/ Hicks, Sherry (eds.), Hearing, Mother Father Deaf: Hearing People in Deaf Families. Washington, DC: Gallaudet University Press, 54⫺98. Blumenthal-Kelly, Arlene 1995 Fingerspelling Interaction: A Set of Deaf Parents and Their Deaf Daughter. In: Lucas, Ceil (ed.), Sociolinguistics in Deaf Communities. Washington, DC: Gallaudet University Press, 62⫺73. Bogaerde, Beppie van den 2000 Input and Interaction in Deaf Families. PhD Dissertation, University of Amsterdam. Utrecht: LOT. Bogaerde, Beppie van den/Baker, Anne 2002 Are Young Deaf Children Bilingual? In: Morgan, Gary/Woll, Bencie (eds.), Directions in Sign Language Acquisition, Amsterdam: Benjamins, 183⫺206. Bogaerde, Beppie van den/Baker, Anne 2005 Code Mixing in Mother-Child Interaction in Deaf Families. In: Baker, Anne/Woll, Bencie (eds.), Sign Language Acquisition. Amsterdam: Benjamins, 141⫺163. Bornstein, Harry (ed.) 1990 Manual Communication: Implications for Education. Washington, DC: Gallaudet University Press. Boyes-Braem, Penny/Pizzuto, Elena/Volterra, Virginia 2002 The Interpretation of Signs by (Hearing and Deaf) Members of Different Cultures. In: Schulmeister Ralf/Reinitzer, Heimo (eds.), Progress in Sign Language Research. In Honor of Siegmund Prillwitz. Hamburg: Signum, 187⫺219. Boyes-Braem, Penny/Sutton-Spence, Rachel (eds.) 2001 The Hands are the Head of the Mouth: The Mouth as Articulator in Sign Languages. Hamburg: Signum. Brennan, Mary/Colville, Martin/Lawson, Lilian 1984 Words in Hand: A Structural Analysis of the Signs of British Sign Language. Edinburgh: Edinburgh British Sign Language Research Project, Moray House College of Education. Brentari, Diane/Padden, Carol 2001 Native and Foreign Vocabulary in American Sign Language: A Lexicon with Multiple Origins. In: Brentari, Diane (ed.), Foreign Vocabulary in Sign Languages: A CrossLinguistic Investigation of Word Formation. Mahwah, NJ: Lawrence Erlbaum, 87⫺119. Brenzinger, Matthias/Dimmendaal, Gerrit 1992 Social Contexts of Language Death. In: Brenzinger, Matthias (ed.), Language Death. Factual and Theoretical Explorations with Special Reference to East Africa. Berlin: Mouton de Gruyter, 3⫺6. Burns, Sarah E. 1998 Irish Sign Language: Ireland’s Second Minority Language. In: Lucas, Ceil (ed.), Pinky Extension and Eye Gaze: Language Use in Deaf Communities. Washington, DC: Gallaudet University Press, 233⫺274.
Carmel, Simon J. 1982 International Hand Alphabet Charts. Silver Spring, MD: National Association of the Deaf.
Cokely, Dennis 1983 When Is a Pidgin Not a Pidgin? In: Sign Language Studies 38, 1–24.
Cormier, Kearsy/Tyrone, Martha/Schembri, Adam 2008 One Hand or Two? Nativisation of Fingerspelling in ASL and BANZSL. In: Sign Language and Linguistics 11, 3–44.
Davis, Jeffrey 1989 Distinguishing Language Contact Phenomena in ASL Interpretation. In: Lucas, Ceil (ed.), The Sociolinguistics of the Deaf Community. San Diego, CA: Academic Press, 85–102.
Deuchar, Margaret 1977 Sign Language Diglossia in a British Deaf Community. In: Sign Language Studies 17, 347–356.
Dorian, Nancy 1982 Defining the Speech Community in Terms of Its Working Margins. In: Romaine, Suzanne (ed.), Sociolinguistic Variation in Speech Communities. London: Edward Arnold, 25–33.
Dugdale, Patricia/Kennedy, Graeme/McKee, David/McKee, Rachel 2003 Aerial Spelling and NZSL: A Response to Forman (2003). In: Journal of Deaf Studies and Deaf Education 8, 494–497.
Emmorey, Karen/Borinstein, Helsa/Thompson, Robin 2005 Bimodal Bilingualism: Code-blending between Spoken English and American Sign Language. In: Cohen, James/McAlister, Tara/Rolstad, Kellie/MacSwan, Jeff (eds.), ISB4: Proceedings of the 4th International Symposium on Bilingualism. Somerville, MA: Cascadilla Press, 663–673.
Emmorey, Karen/Borinstein, Helsa/Thompson, Robin/Gollan, Tamar 2008 Bimodal Bilingualism. In: Bilingualism: Language and Cognition 11, 43–61.
Ferguson, Charles A. 1959 Diglossia. In: Word 15, 325–340.
Fischer, Susan 1978 Sign Language and Creoles. In: Siple, Patricia (ed.), Understanding Language through Sign Language Research. New York: Academic Press, 309–331.
Fischer, Susan 1996 By the Numbers: Language-internal Evidence for Creolization. In: International Review of Sign Linguistics 1, 1–22.
Fishman, Joshua 1967 Bilingualism with and Without Diglossia; Diglossia with and Without Bilingualism. In: Journal of Social Issues 32(2), 29–38.
Grosjean, François 1982 Life with Two Languages: An Introduction to Bilingualism. Cambridge, MA: Harvard University Press.
Hall, Robert A. 1966 Pidgin and Creole Languages. Ithaca, NY: Cornell University.
Hamers, Josiane/Blanc, Michel 2003 Bilinguality and Bilingualism. Cambridge: Cambridge University Press.
Hohenberger, Annette/Happ, Daniela 2001 The Linguistic Primacy of Signs and Mouth Gestures Over Mouthing: Evidence from Language Production in German Sign Language (DGS). In: Boyes-Braem, Penny/Sutton-Spence, Rachel (eds.), The Hands Are the Head of the Mouth: The Mouth as Articulator in Sign Languages. Hamburg: Signum, 153–189.
35. Language contact and borrowing Johnston, Trevor/Schembri, Adam 2007 Australian Sign Language: An Introduction to Sign Language Linguistics. Cambridge: Cambridge University Press. Kendon, Adam 2004 Gesture: Visible Action as Utterance. Cambridge: Cambridge University Press. Kuntze, Marlon 2000 Codeswitching in ASL and Written English Contact. In: Emmorey, Karen/Lane, Harlan (eds.), The Signs of Language Revisited: An Anthology to Honor Ursula Bellugi and Edward Klima. Mahwah, NJ: Lawrence Erlbaum, 287⫺302. Kyle, Jim/Woll, Bencie 1985 Sign Language: the Study of Deaf People and their Language. Cambridge: Cambridge University Press. Laffon de Ladébat, Andre-Daniel 1815 Recueil des Définitions et Réponses les plus Remarquables de Massieu et Clerc, SourdsMuets, aux Diverses Questions qui leur ont été Faites dans les Séances Publiques de M. l’Abbé Sicard à Londres [A Collection of the Most Remarkable Definitions and Answers of Massieu and Clerc]. London: Cox and Baylis. Lane, Harlan 1984 When the Mind Hears. New York: Random House. Levelt, Willem J. M. 1989 Speaking: From Intention to Articulation. Cambridge, MA: MIT Press. Lucas, Ceil/Valli, Clayton 1989 Language Contact in the American Deaf Community. In: Lucas, Ceil (ed.), The Sociolinguistics of the Deaf Community. San Diego: Academic Press, 11⫺40. Lucas, Ceil/Valli, Clayton 1991 ASL or Contact Signing: Issues of Judgment. In: Language in Society 20, 201⫺216. Lucas, Ceil/Valli, Clayton 1992 Language Contact in the American Deaf Community. San Diego, CA: Academic Press. Machabe´e, Dominique 1995 Description and Status of Initialized Signs in Quebec Sign Language. In: Lucas, Ceil (ed.), Sociolinguistics in Deaf Communities. Washington, DC: Gallaudet University Press, 29⫺61. Mathur, Gaurav/Rathmann, Christian 1998 Why not “give-us”: an Articulatory Constraint in Signed Languages. In: Dively, Valerie/ Metzger, Melanie/Taub, Sarah/Baer, Anne-Marie (eds.), Signed Languages: Discoveries from International Research. Washington, DC: Gallaudet University Press, 1⫺25. Mayberry, Rachel/Fischer, Susan/Hatfield, Nancy 1983 Sentence Repetition in American Sign Language. In: Kyle, Jim/Woll, Bencie (eds.), Language in Sign: An International Perspective. London: Croom Helm, 206⫺215. McCagg, William 1993 Some Problems in the History of Deaf Hungarians. In: Vickery van Cleve, John (ed.), Deaf History Unveiled. Washington, DC: Gallaudet University Press, 252⫺271. McKee, Rachel/McKee, David/Smiler, Kirsten/Pointon, Karen 2007 Maori Signs: The Construction of Indigenous Deaf Identity in New Zealand Sign Language. In: Quinto-Pozos, David (ed.), Sign Languages in Contact. Washington, DC: Gallaudet University Press, 31⫺81. McKee, Rachel/Napier, Jemina 2002 Interpreting into International Sign Pidgin: An Analysis. In: Sign Language & Linguistics 5, 27⫺54. Meir, Irit/Sandler, Wendy 2008 A Language in Space: The Story of Israeli Sign Language. New York: Lawrence Erlbaum.
Mottez, Bernard
1993 The Deaf Mute Banquets and the Birth of the Deaf Movement. In: Fischer, Renate/Lane, Harlan (eds.), Looking Back: A Reader on the History of Deaf Communities and Their Sign Languages. Hamburg: Signum, 143–156.
Murray, Joseph
2009 Sign Languages. In: Iriye, Akira/Saunier, Pierre-Yves (eds.), The Palgrave Dictionary of Transnational History. Basingstoke: Palgrave Macmillan, 947–948.
Muysken, Pieter
2000 Bilingual Speech: A Typology of Code-Mixing. Cambridge: Cambridge University Press.
Myers-Scotton, Carol
1993 Duelling Languages: Grammatical Structure in Codeswitching. Oxford: Oxford University Press.
Namir, Lila/Sela, Israel/Rimor, Mordecai/Schlesinger, Israel M.
1979 Dictionary of Sign Language of the Deaf in Israel. Jerusalem: Ministry of Social Welfare.
Ojala-Signell, Raili/Komarova, Anna
2006 International Development Cooperation Work with Sign Language Interpreters. In: McKee, Rachel (ed.), Proceedings of the Inaugural Conference of the World Association of Sign Language Interpreters, Worcester, South Africa, 31 October–2 November 2005. Coleford, Gloucestershire: Douglas McLean, 115–122.
Padden, Carol/LeMaster, Barbara
1985 An Alphabet on Hand: The Acquisition of Fingerspelling in Deaf Children. In: Sign Language Studies 47, 161–172.
Pizzuto, Elena/Volterra, Virginia
2000 Iconicity and Transparency in Sign Languages: A Cross-Linguistic Cross-Cultural View. In: Emmorey, Karen/Lane, Harlan (eds.), The Signs of Language Revisited: An Anthology to Honor Ursula Bellugi and Edward Klima. Mahwah, NJ: Lawrence Erlbaum, 261–286.
Pyers, Jennie/Emmorey, Karen
2008 The Face of Bimodal Bilingualism: Grammatical Markers in American Sign Language are Produced When Bilinguals Speak to English Monolinguals. In: Psychological Science 19(6), 531–536.
Quinto-Pozos, David
2007 Outlining Considerations for the Study of Sign Language Contact. In: Quinto-Pozos, David (ed.), Sign Languages in Contact. Washington, DC: Gallaudet University Press, 1–28.
Quinto-Pozos, David
2008 Sign Language Contact and Interference: ASL and LSM. In: Language in Society 37, 161–189.
Quinto-Pozos, David
2009 Code-Switching Between Sign Languages. In: Bullock, Barbara/Toribio, Jacqueline (eds.), The Handbook of Code-Switching. Cambridge: Cambridge University Press, 221–237.
Romaine, Suzanne
1995 Bilingualism. Oxford: Blackwell.
Sankoff, Gillian
2001 The Linguistic Outcome of Language Contact. In: Trudgill, Peter/Chambers, Jack/Schilling-Estes, Natalie (eds.), The Handbook of Sociolinguistics. Oxford: Blackwell, 638–668.
Schembri, Adam/Wigglesworth, Gillian/Johnston, Trevor/Leigh, Greg/Adam, Robert/Barker, Roz
2002 Issues in the Development of the Test Battery for Australian Sign Language Morphology and Syntax. In: Journal of Deaf Studies and Deaf Education 7, 18–40.
Schermer, Trude
1990 In Search of a Language. Delft: Eburon.
Schermer, Trude
2001 The Role of Mouthings in Sign Language of the Netherlands: Some Implications for the Production of Sign Language Dictionaries. In: Boyes Braem, Penny/Sutton-Spence, Rachel (eds.), The Hands Are the Head of the Mouth: The Mouth as Articulator in Sign Languages. Hamburg: Signum, 273–284.
Schick, Brenda
2003 The Development of ASL and Manually-Coded English Systems. In: Marschark, Marc/Spencer, Patricia (eds.), Oxford Handbook of Deaf Studies, Language, and Education. New York: Oxford University Press, 219–231.
Schmaling, Constanze
2001 ASL in Northern Nigeria: Will Hausa Sign Language Survive? In: Dively, Valerie/Metzger, Melanie/Taub, Sarah/Baer, Anne-Marie (eds.), Signed Languages: Discoveries from International Research. Washington, DC: Gallaudet University Press, 180–196.
Scott-Gibson, Elizabeth/Ojala, Raili
1994 International Sign Interpreting. Paper Presented at the Fourth East and South African Sign Language Seminar, Uganda.
Stokoe, William
1969 Sign Language Diglossia. In: Studies in Linguistics 21, 27–41.
Supalla, Ted/Webb, Rebecca
1995 The Grammar of International Sign: A New Look at Pidgin Languages. In: Emmorey, Karen/Reilly, Judy (eds.), Language, Gesture, and Space. Hillsdale, NJ: Lawrence Erlbaum, 333–352.
Sutton-Spence, Rachel
1994 The Role of the Manual Alphabet and Fingerspelling in British Sign Language. PhD Dissertation, University of Bristol.
Sutton-Spence, Rachel
1998 Grammatical Constraints on Fingerspelled English Verb Loans in BSL. In: Lucas, Ceil (ed.), Pinky Extension and Eye Gaze: Language Use in Deaf Communities. Washington, DC: Gallaudet University Press, 41–58.
Sutton-Spence, Rachel/Woll, Bencie
1999 The Linguistics of British Sign Language: An Introduction. Cambridge: Cambridge University Press.
Tervoort, Bernard
1973 Could There Be a Human Sign Language? In: Semiotica 9, 347–382.
Thomason, Sarah
2001 Language Contact: An Introduction. Washington, DC: Georgetown University Press.
Thomason, Sarah/Kaufman, Terrence
1988 Language Contact, Creolization, and Genetic Linguistics. Berkeley, CA: University of California Press.
Valli, Clayton/Lucas, Ceil
2000 Linguistics of American Sign Language. Washington, DC: Gallaudet University Press.
Vinson, David/Thompson, Robin/Skinner, Robert/Fox, Neil/Vigliocco, Gabriella
2010 The Hands and Mouth Do Not Always Slip Together in British Sign Language: Dissociating Articulatory Channels in the Lexicon. In: Psychological Science 21(8), 1158–1167.
Waters, Dafydd/Campbell, Ruth/Capek, Cheryl/Woll, Bencie/David, Anthony/McGuire, Philip/Brammer, Michael/MacSweeney, Mairead
2007 Fingerspelling, Signed Language, Text and Picture Processing in Deaf Native Signers: The Role of the Mid-Fusiform Gyrus. In: Neuroimage 35, 1287–1302.
Weinreich, Uriel
1968 Languages in Contact: Findings and Problems. The Hague: Mouton.
Woll, Bencie/Adam, Robert
2012 Sign Language and the Politics of Deafness. In: Martin-Jones, Marilyn/Blackledge, Adrian/Creese, Angela (eds.), The Routledge Handbook of Multilingualism. London: Routledge, 100–116.
Woll, Bencie/Elton, Frances/Sutton-Spence, Rachel
2001 Multilingualism: The Global Approach to Sign Languages. In: Lucas, Ceil (ed.), The Sociolinguistics of Sign Languages. Cambridge: Cambridge University Press, 8–32.
Woll, Bencie/Ladd, Paddy
2003 Deaf Communities. In: Marschark, Marc/Spencer, Patricia (eds.), The Handbook of Deaf Studies, Language and Education. Oxford: Oxford University Press, 151–163.
Woodward, James
1973 Some Characteristics of Pidgin Sign English. In: Sign Language Studies 3, 39–46.
Yoel, Judith
2007 Evidence for First-language Attrition of Russian Sign Language Among Immigrants to Israel. In: Quinto-Pozos, David (ed.), Sign Languages in Contact. Washington, DC: Gallaudet University Press, 153–191.
Yoel, Judith
2009 Canada's Maritime Sign Language. PhD Dissertation, University of Manitoba.
Robert Adam, London (United Kingdom)
36. Language emergence and creolisation
1. Introduction
2. Creolisation: state of the art
3. Emerging sign languages
4. Structural similarities between creole and sign languages
5. Similarities in acquisition conditions
6. Creolisation and recreolisation revisited
7. Conclusion
8. Literature
Abstract

It has been argued that there are numerous interesting similarities between sign and creole languages. Traditionally, the term 'creolisation' has been used to refer to the development of a pidgin into a creole language. In this chapter, I take creolisation to apply when children create a new language because they do not have access to a conventional language model during acquisition. In this light, creolisation equals nativisation. Sign and creole languages can be compared to each other because of certain structural similarities as well as similarities in acquisition conditions. Crucial to the discussion here is the role of children in language acquisition when there is no conventional language model.
1. Introduction

In the recent past, there has been renewed interest in the phenomenon of creolisation in sign language circles (Kegl 2002; Aronoff/Meir/Sandler 2005). The term 'creolisation' has traditionally been used in the field of creole studies to refer to the development of a pidgin into a creole language (Todd 1990; Bickerton 1977, 1981; Andersen 1983; Hymes 1971), a definition which implies that as soon as a pidgin functions as a first language for its speakers, it has become a creole. This link between pidgin and creole, especially the question of whether a creole always develops from a pidgin, has been one of the central issues in the field of creole studies for decades. Several proposals have been put forward to account for the emergence of creole languages. Mühlhäusler (1986) and Bickerton (1981), among others, proposed different scenarios for the genesis of creole languages. What these proposals had in common is that they analysed creolisation from a sociolinguistic perspective.

Within the field of sign linguistics, several researchers have pointed out a number of interesting similarities between sign and creole languages. Early comparisons between them were based on studies investigating American Sign Language (ASL) (Fischer 1978; Woodward 1978; Gee/Goodhart 1988). These scholars have argued that sign languages have creole structures and that the structural properties shared by sign and creole languages are not accidental. It is clear that there is no genetic affiliation between these two groups of languages, given that they belong to two different modalities. Language contact between these two groups of languages is also excluded as a possible explanation for the observed similarities, since most sign languages do not coexist with creole languages. Hence, there is a need for an adequate explanation. In this chapter, I discuss the similarities described so far between these two language groups which can be explained by acquisition conditions. Compelling evidence comes from different areas: homesigns, young sign languages (e.g. Nicaraguan Sign Language), and acquisition studies of creole languages (Adone 2001b, 2008a; Adone/Vainikka 1999). A few scholars have argued that creolisation takes place in the formation of sign languages (Kegl/Senghas/Coppola 1999; Adone 2001b; Aronoff/Meir/Sandler 2005) – an issue that will be addressed in this chapter in more detail. Thus, the coupling of studies on creole and sign languages, especially young sign languages, can be expected to provide a unique perspective on the early stages of language genesis and development.

This chapter has two goals. First, it analyses creolisation as a process observed in the genesis of creole languages. Second, it discusses the development of sign languages as a case of creolisation that bears similarities to the development of creole languages. Here, I take a psycholinguistic stand on creolisation, thus allowing for cross-modal comparison. Creolisation in the broader sense can be regarded as a process that takes place under certain specific circumstances of acquisition, that is, when children are not exposed to a conventional language model. Studies conducted by Newport (1999) and others have shown that, in the absence of a conventional language model, children use some of the other abilities they are equipped with to learn a language. They are even capable of surpassing an inconsistent language model.
They are also able to regularise their input as seen in the case of children acquiring creole languages today (Adone forthcoming). As the case of homesigners reveals, children are also capable of inventing their own linguistic systems (Goldin-Meadow 2003). Taken together, these studies bring to light the significant contribution of children to language acquisition.
This chapter is organised as follows: section 2 examines the term creolisation in depth as it is understood in the field of creole studies. In section 3, we will introduce two types of emerging sign languages, Deaf community sign languages and young village sign languages. In section 4, the most important structural similarities between creole languages and sign languages are presented, while in section 5, the acquisition conditions of creole languages and sign languages are compared. The emphasis here is on the role that children play in the acquisition of these languages. Section 6 discusses the implications of the findings with respect to creolisation and recreolisation. This section highlights that humans have 'language-ready' brains. In the absence of adequate input, humans, i.e. children, still create a language system. Section 7 summarises the main ideas presented in this chapter and concludes that creolisation takes place across modalities.
2. Creolisation: state of the art

2.1. Creolisation within the field of creole studies

Without doubt, creolisation has been one of the most controversial issues within the field of creole studies. In this section, we will address two central issues in the discussion on creolisation that have shaped the field, namely the genesis of creole languages per se and the 'exceptional' status of creole languages.
2.1.1. The genesis of creole languages

Several accounts of creolisation have been articulated so far. In some of the earliest ones, it is assumed that a pidgin precedes a creole, whereas according to other accounts, a pidgin is not necessary for a creole to emerge. The classical view that creolisation is a process that takes place when a pidgin becomes the mother tongue of its speakers has been supported by several scholars (Hall 1966; Todd 1990; among others). According to this view, a pidgin is a structurally and lexically simplified system which emerges in a language contact situation, and eventually develops into a fully-fledged language, that is, a creole. As a simplified system, a pidgin typically has the following characteristics: a very restricted lexicon, no inflectional morphology, no functional categories, and a highly variable word order. In contrast, a creole system typically shows an elaborate lexicon, derivational and some inflectional morphology, functional categories, and an underlying word order. The creole system, as compared to the pidgin one, is less variable. The creolisation process is assumed to take place as soon as the first generation of children acquires the pidgin as a first language.

Another view is that creolisation takes place when pidgins expand into creole languages without nativisation. Scholars such as Sankoff (1979), Chaudenson (1992), Singler (1992, 1996), Arends (1993), and McWhorter (1997) have argued against the nativisation-based view of creolisation. Based on detailed historical reconstruction, scholars have argued that creolisation can be a gradual process taking place over several generations of speakers (Arends (1993), Plag (1993), and Roberts (1995) for Hawaiian Creole; Baptista (2002) for Cape Verde Creole; Bollée (2007) for Reunion Creole). Under this view, creolisation equates to language change, and the development of grammatical structures in the formation of creoles can be accounted for by universal principles of grammaticalisation (e.g. Plag 1993; Mufwene 1996). This view of creolisation can be assumed to account for the emergence of some creoles. More recently, some scholars have discussed grammaticalisation and creolisation as processes that are not mutually exclusive (Plag 1998; Adone 2009; among others).

Taking a universalist perspective on creolisation, Bickerton (1981, and subsequent work) rejected this view and proposed that there is a break in the transmission between the lexifier languages and the creoles. This has led him to argue that creolisation must be abrupt if there is a breakdown in the transmission of language. In his Language Bioprogram Hypothesis, Bickerton (1984) argues that adult pidgin speakers pass on their pidgin to their children. These children, that is, the first generation of creole speakers, are thus exposed to deficient input. As a result, they have to rely on their 'Language Bioprogram' to invent language. The basic idea here is that creolisation is an instance of first language acquisition in the absence of input. It is nativisation which takes place as soon as a pidgin becomes the first language for its speakers (cf. Bickerton 1974; Thomason/Kaufman 1988; Adone 1994, 2001b, 2003; Mufwene 1999).

On the basis of a series of well-documented socio-historical facts, Arends (1993), Singler (1993, 1996), and others questioned the plausibility of Bickerton's claim. Since then, the role of children and adults in the process of creolisation has become a subject of considerable debate within the field. In the current debate, most scholars adhere to the view that adults rather than children must have been the ones creolising the system (e.g. Lumsden 1999; Lefebvre 1998; Siegel 1999; Veenstra 2003; Singler 1992). For other scholars (e.g. Bickerton 1984, 1990; Adone/Vainikka 1999; Adone 2001b; Bruyn/Muysken/Verrips 1999; Mufwene 1999), children were the ones mainly responsible for creolisation. Following DeGraff (1999), many scholars within the field nowadays assume that both adults and children must have contributed to the process of creolisation (cf. Plag 1998; Baptista 2002; among others). However, little research has been undertaken to provide evidence for either view. One reason for this is the lack of sufficient historical records on the development of most creole languages, especially during the early stages of formation within colonial plantation communities in the seventeenth and eighteenth centuries. Most creole languages emerged in the context of European colonial expansion from the sixteenth century onwards, a setting characterised by rigid social stratification of the society, master-slave relationships, and plantation environments – all important socio-historical components typically present in creolisation (Arends/Muysken/Smith 1995). It is this socio-historical dimension that distinguishes creole languages from non-creole languages (see DeGraff 2003).
2.1.2. On the 'exceptional' status of creoles

The second question which has been controversially discussed concerns the exceptional status of creoles. Muysken (1988) had already argued that creole languages are not exceptional. However, the debate peaked with McWhorter's (2001) proposal, in which he presented arguments for a distinction between creole and non-creole languages.
McWhorter argues that creole grammars can be regarded as "the world's simplest grammars", a view that has evoked much controversy among scholars in the field of creole studies. Behind McWhorter's view is the widespread assumption that creole languages are unique in the sense that they form a distinct and fairly homogeneous group of languages with special features that set them apart from other languages. This view has been referred to in the literature as "creole exceptionalism". Numerous scholars within the field of creole studies have argued against creole exceptionalism or uniqueness (DeGraff 2003; Mufwene 2000; Ansaldo/Matthews/Lim 2007). According to them, in terms of structure, creole languages are neither simple nor inferior as compared to other languages. The only relevant difference between creole and non-creole languages can be explained in terms of age. Creole languages, like sign languages, are 'young' languages. Many of the creole languages emerged in the eighteenth century, and some are about 200 years of age. The recent emergence of these languages has fortunately enabled us to observe some of the developmental stages languages go through, and thus to gain insights into language emergence and development. As we will see in the following sections, creole languages are not structurally exceptional. In fact, the similarities between creole and sign languages are so striking that it is unlikely that they are coincidental.

One last point needs to be mentioned here. Studies that have focussed on the socio-historical/cultural factors involved in creolisation have clarified what E-creolisation is. This process takes place on the societal level within a specific time frame (i.e. colonisation) and within a specific type of society (master-slave relation). E-creolisation will not be discussed further in this chapter because it is not relevant in the present context. However, we note that the deaf communities in Europe, for instance, went through periods of societal suppression due to the widespread belief in oral education following the International Congress on the Education of the Deaf held in Milan in 1880 (see chapter 38 for details).
2.2. Creolisation within the field of sign language research

As briefly mentioned in section 1, a number of scholars within the field of sign language research have drawn attention to similarities between sign and creole languages (Deuchar 1987; Fischer 1978; Woodward 1978; Gee/Goodhart 1988). Both Woodward (1978) and Fischer (1978) have argued that ASL is the outcome of creolisation of indigenous American gestures and sign systems and the French Sign Language brought to the United States by Laurent Clerc in the early nineteenth century. Fischer, for example, argues that ASL is structurally similar to creole languages (see section 4 for details). ASL, like Jamaican Creole, also had a three-'lect' distinction (acrolect, mesolect, and basilect). While Fischer focuses on the syntactic similarities between ASL and creole languages as well as on the parallels in the social situation, Woodward discusses lexical change in ASL and creole languages. More recently, Fischer (1996) has presented interesting evidence for creolisation in the number system of present-day ASL. She argues that the ASL number system is based on "a hybridisation of American and French numbers" (1996, 1). A closer look at the ASL number signs for 6–9 shows an innovation which is typically seen in creole languages
in that they go beyond the languages that provide the lexical bases. Further evidence for creolisation is seen in the randomness of the mixing between American and French number forms. Gee/Goodhart (1988) point out striking grammatical similarities between ASL and creole languages, such as (i) the use of topic-comment word order, (ii) lack of tense marking, but a rich aspectual system, (iii) use of postverbal free morphemes for completive aspect, and (iv) absence of pleonastic subjects and passive constructions. Some of these features will be discussed in section 4 (also see Kegl/Senghas/Coppola 1999; Kegl 2002; Meier 1984).
3. Emerging sign languages

Meir et al. (2010a) make a distinction between two types of emerging sign languages: Deaf community sign languages and village sign languages. According to this distinction, Deaf community sign languages arise when signers of different backgrounds are brought together in cities or schools. Nicaraguan Sign Language, Israeli Sign Language, and Mauritian Sign Language are typical examples of this type of sign language. Village sign languages, on the other hand, emerge in small communities or villages in which a number of deaf children are born. Transmission of these sign languages takes place within and between families. Socially speaking, these villages are more or less insular. In section 3.2, we will briefly address one recently described young village sign language, Al-Sayyid Bedouin Sign Language (see chapter 24 for discussion of other village sign languages).

Yet another type worth mentioning here are alternate (or secondary) sign languages. These sign languages have been described in the literature as linguistic systems that are used by both deaf and hearing people. For the hearing community, these sign languages function as second languages and are mostly used for cultural reasons (e.g. for ceremonies, when silence is requested in the presence of sacred objects, or in the case of a death), thus serving a secondary purpose (Adone/Maypilama 2012; Cooke/Adone 1994; Kendon 1988; Maypilama/Adone 2012). Because of their apparently restricted use, alternate sign languages are generally not regarded as full-fledged languages (see chapter 23, Manual Communication Systems: Evolution and Variation, for further discussion).
3.1. Deaf community sign languages

Recent studies have documented the genesis of a few sign languages. Senghas (1995) and Kegl, Senghas, and Coppola (1999) report one of the first cases of the birth of a natural language, namely Nicaraguan Sign Language (ISN). Senghas (1995) tracks the historical development of ISN, which started in the late 1970s in Nicaragua when the government established special education programs for deaf children in the capital Managua. Deaf children from scattered villages were sent to this school and brought with them their individual homesign systems. Although teachers insisted on an oral language approach, that is, the development of oral language skills
as well as lip-reading ability in Spanish, children used gestures and signs with each other. In this environment of intense contact, the deaf children developed a common system to communicate with each other, and within only a couple of years, a new sign language emerged. These signs formed the input for new groups of deaf children entering school every year. Current work on ISN reveals the gradual development of grammatical features such as argument structure, use of space, and grammatical markings, among others, across different cohorts of learners (Senghas 2000; Senghas et al. 1997; Coppola/So 2005).

Adone (2007) investigated another interesting case of recent language genesis in the Indian Ocean, on the island of Mauritius. In Mauritius, the first school for the deaf opened in September 1969 in Beau Bassin, one of the major cities on the island. According to information disclosed by the Society for the Welfare of the Deaf (Joonas, p.c.), in the early seventies deaf children were recruited across the island and sent to school in Beau Bassin. Children stayed in dormitories at school during the week and were sent back to their villages to spend the weekends with their families. In 2004, Adone and Gébert found several generations of Mauritian Sign Language (MSL) users. Parallel to the MSL users, in 2004 I discovered a small group of children in Goodlands, in the north of the island, who were using a sign system different from that of the deaf population in Beau Bassin. Given that these children did not have contact with the deaf community (children and adults) in Beau Bassin, it seemed worthwhile to take a closer look at them. There were around 30 children of different ages. The older children, between 6 and 7 years of age, were taught to lip-read, read, and write by a teacher who had no training in deaf education. The younger ones were allowed to play and interact freely with each other, as communication with the teachers was extremely difficult. Based on first-hand observations of the Mauritian situation, I proposed that this system could easily be regarded structurally as a homesign system and that it provided us with insights into the earliest stages in the formation of a sign language (Adone 2007, 2009). Extrapolating from the results of work done so far, it becomes clear that MSL, in contrast to other established sign languages, has developed little morphology. This is evidenced by the distribution of plain, spatial, and agreement verbs: there are fewer agreement verbs than plain and spatial verbs. Native signers use SVO order quite frequently, but they do show variability.
3.2. Young village sign languages

Another extremely interesting case of an emerging sign language is seen in the development of Al-Sayyid Bedouin Sign Language (ABSL). This sign language emerged in the Al-Sayyid Bedouin community (Negev region, Israel), "a small, insular, endogamous community with a high incidence of nonsyndromic, genetically recessive, profound prelingual neurosensory deafness" (Aronoff et al. 2010, 134). According to researchers, this sign language is approximately 70 years old (Sandler et al. 2005). The sign language is remarkable for a number of reasons. First, while the two examples discussed in the previous section clearly illustrate the case of children deprived of exposure to language, who as a result invent a new system, ABSL exemplifies language emergence within a village community with little influence from sign languages in the environment. Second, ABSL appears to have a regular syntax – SOV and head-modifier order (Sandler et al. 2005) – and regular compounding (Meir et al. 2010b), but has no spatial morphology and has also been claimed to lack duality of patterning (Aronoff et al. 2010). Thus, these three studies on ISN, MSL, and ABSL, while tracking individual linguistic developments, bring to light the various stages and mechanisms involved in the creation of a new sign language.
4. Structural similarities between creole and sign languages

In this section, I focus on some key structural similarities between sign and creole languages reported in the literature. A complete overview of the attested parallels would go beyond the scope of this study. Therefore, I will address only five aspects: word order, aspectual marking, reduplication, serial verb constructions, and morphological structure.

At this point, it is important to establish a distinction that will be relevant throughout the whole discussion, namely the distinction between 'mature, established' and 'young' sign languages. This distinction is crucial because not only age but also the degree of conventionalisation plays a role in the development and life-cycles of languages. The term 'mature, established' language is used mainly to refer to sign languages which are reported to have a history of more or less 200 years, such as ASL, British Sign Language (BSL), German Sign Language (DGS), and others. There are other sign languages, such as Adamorobe Sign Language (AdaSL) in Ghana (Nyst 2007) or Yolngu Sign Language (YSL), an alternate sign language used in the Top End of the Northern Territory of Australia, which according to this criterion are also mature, but these languages seem to be linguistically different from the mature, established sign languages mentioned above. In contrast, the term 'young' sign language refers to sign languages such as ISN, ABSL, Israeli Sign Language, and MSL, which all have a relatively short history and are not yet (fully) established. As shown in section 3.1, similar conditions played a role in the genesis of both ISN and MSL.

Another crucial aspect in the discussion below is the non-relatedness of creole and sign languages. Languages from these two groups are not genetically related to each other, and they generally do not have any close contact with each other to the extent that they could influence each other directly. As a result, it is safe to assume that the similarities found between these two language groups cannot be explained in terms of genetic affiliation or language contact. Furthermore, creole languages are spoken languages, thus belonging to the auditory-oral modality, whereas sign languages use the visual-manual modality.
4.1. Word order

Many of the earlier studies on word order in both creole and sign languages concentrated on discourse notions such as topic and comment. Researchers in the early seventies proposed that creole languages have no basic word order, and that the order of
sentence elements instead depended on discourse – an issue heavily debated in the eighties. Over the years, an increasing body of studies on individual creoles has shown that creole languages do seem to have a basic word order in matrix and embedded sentences, i.e. SVO, as well as hierarchical structure (Adone 1994; Baptista 2002; Veenstra 1996; Syea 1993). Interestingly, a similar debate took place in sign language linguistics circles: while some early studies stressed the importance of discourse notions (e.g. Ingram 1978), later research demonstrated for various sign languages that they do have a basic word order – be it SVO or SOV (see chapter 12, Word Order, for discussion and methodological problems). Clearly, the issue of word order is closely related to information packaging, that is, the organisation of sentences to convey information structure, such as topic and focus. Topic and focus are relative terms used to refer to old and new information, respectively (see chapter 21, Information Structure, for details). It has been claimed independently that both creole and sign languages, even though they have basic word order, make heavy use of topic-comment structures. Also, it has been observed for various sign languages that elements may be repeated in sentence-final position in order to foreground (i.e. focus) these elements. In ASL, for instance, wh-signs (1a), verbs (1b), and quantifiers may be doubled (Petronio 1993; in Sandler/Lillo-Martin 2006, 417 f.).
        _________________wh
(1) a.  who buy c-a-r who                                 [ASL]
        'Who bought the car?'

                              __hn
    b.  anne like ice-cream like
        'Anne likes ice cream.'

Similar doubling structures are also attested in various creole languages – however, there is a restriction on the elements that can be doubled as well as on the position of the doubled element. Generally, the doubled element appears sentence-initially. Investigating the phenomenon for Isle de France Creole (a French-based creole), Corne (1999) refers to it as 'double predication'. A look at the Mauritian Creole examples in (2) shows that in this language, verbs can be doubled (2a) but wh-words cannot (2b).

(2) a.   Galupe ki   mo fin galupe                        [Mauritian Creole]
         run    that I  asp run
         'I ran a lot.'

    b. * kisana in  aste loto la  kisana?
         who    asp buy  car  det who
         'Who bought the car?'
These surface similarities between creole and sign languages, I believe, are best explained as resulting from their discourse-oriented character. These languages use prosodic prominence when elements are focused.
4.2. Aspectual marking

The use of aspectual marking is another area of similarity between creole and sign languages. Over the years, it has become clear that the majority of creole languages
have both tense (anteriority) and aspect (completion) markers. Furthermore, in the past few decades, several studies have shown that, across creole languages, tense and aspect markers were attested in the early stages of creolisation (Arends 1994; Bickerton 1981; Bollée 1977, 1982; Baker/Fon Sing 2007; Corne 1999). Both tense and aspect markers can be used with verbs and predicative adjectives in creole languages. The prominence of aspectual marking in the creole TAM-system has led Bickerton (1981) and others to argue that the system is primarily aspect-oriented. Studies on sign languages also reveal that verbs and predicative adjectives may inflect for aspect. Inflection for tense, however, is not attested. Klima and Bellugi (1979) provide an overview of aspectual inflections in ASL, such as iterative, habitual, and continuative (see also Rathmann 2005). Similar aspectual markings with the same functions have been reported for other 'mature' as well as 'young' sign languages, for example, BSL (Sutton-Spence/Woll 1999) and MSL (Adone 2007). Another very interesting phenomenon is the use of the sign finish in ASL and other sign languages to mark completive aspect (Fischer 1978; Rathmann 2005; see chapter 9, Tense, Aspect, and Modality, for further discussion). Interestingly, we find a parallel development in creole languages. Most creole languages have developed a completion marker for aspect which derives from the superstrate/lexifier languages involved in their genesis. In the case of Mauritian Creole, French is the lexifier language and the aspectual marker fin derives from the French verb finir ('finish'; see example (3b) below). This development is interesting for two reasons. First, in one of the emergent sign languages studied, MSL, the sign finish is also used to mark completion across generations of signers. Second, Adone (2008a) found that young homesigners overgeneralise the gesture 'GO/END/FINISH' to end sentences in narratives. Taken together, this provides empirical support for the view that aspectual marking, i.e. completion, is part of the basic set of features/markings found in the initial stages of language genesis and development. Further evidence comes from studies on first language acquisition of spoken languages, which indicate that, cross-linguistically, children seem to mark aspect first (Slobin 1985).
4.3. Reduplication

The next two structures to be discussed, namely reduplication and serial verb constructions, have been selected because of their relevance in the current theoretical discussion on recursion as a defining property of UG (Roeper 2007; Hauser/Fitch 2003). A closer look at reduplication reveals that both language groups commonly make use of reduplication, as seen in the following examples from spoken creoles (3a–c) and DGS (4a–b) (in the sign language examples, reduplication is marked by 'CC').

(3) a. Olabat bin wokwok oldei                            [Ngukurr Kriol]
       'They were walking all day.'

    b. … lapli lapli ki fin tonbe …                       [Mauritian Creole]
       '… there was a lot of rain …'

    c. Ai bin luk munanga olmenolmen                      [Ngukurr Kriol]
       'I saw many white men.'

(4) a. night index1 driveCC                               [DGS]
       'I drove the whole night.'

    b. garden childCC play
       'The children are playing in the garden.'

In both creole and sign languages, verb reduplication fulfils (at least) two functions: (i) realisation of aspectual meaning 'to V habitually, repeatedly, or continuously', as in (3a) and (4a); (ii) expression of intensive or augmentative meaning in the sense of 'V a lot', as in (3b). Such patterns are attested in various creoles including French-based, Portuguese-based, and English-based creoles (Bakker/Parkvall 2005) as well as in established and emerging sign languages (Fischer 1973; Senghas 2000). Adone (2003) drew a first sketch of the similarities between creoles and sign languages with respect to reduplication to mark plurality, collectivity, and distribution in nominals. Interestingly, among these three distinct functions, plurality (3c, 4b) and collectivity are found to be widespread in both creole and sign languages (cf. Bakker/Parkvall 2005; Pfau/Steinbach 2006). Given the extensive use of reduplication in these two language groups, it is plausible to argue that (full) reduplication is a syntactic structure that emerges early in language genesis because it is part of the basic set of principles available for organising human linguistic behaviour, a view that has been previously taken by several scholars (Bickerton 1981; Goldin-Meadow 2003; Myers-Scotton, p.c.).
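To make explicit how a single formal operation can serve these different functions, consider the following toy sketch. It is my own illustration rather than an analysis from this chapter; the stems are taken from the examples in (3), the function labels paraphrase the uses just discussed, and the mapping is deliberately simplified.

    # Full reduplication as one operation with several functions
    # (aspect, augmentation, plurality), simplified from (3)-(4).

    def reduplicate(stem: str) -> str:
        """Full reduplication: copy the entire stem once."""
        return stem + stem

    # Stems from (3) with the function each reduplication serves (illustrative).
    EXAMPLES = [
        ("wok",   "aspectual",    "walk continuously/habitually"),  # Ngukurr Kriol (3a)
        ("lapli", "augmentative", "a lot of rain"),                 # Mauritian Creole (3b)
        ("olmen", "plural",       "many white men"),                # Ngukurr Kriol (3c)
    ]

    for stem, function, meaning in EXAMPLES:
        print(f"{stem} -> {reduplicate(stem)}  ({function}: '{meaning}')")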
4.4. Serial verb constructions

Serial verb constructions (SVCs) are typically defined as complex predicates containing at least two verbs within a single clause. Classical examples come from the Kwa languages of West Africa, the Austronesian languages of New Guinea, and Malagasy. In the field of creole studies, SVCs have long been regarded as evidence 'par excellence' for the substrate hypothesis, according to which creole grammars reflect the substrate languages that have been involved in the genesis of creole languages (e.g. Lefebvre 1986, 1991). While the role played by substrate languages cannot be denied, I believe that universal principles operating in language acquisition are likely to offer a better explanation of the formation of creole languages. A look at SVCs shows that some properties can be regarded as core properties; these include: (i) only one subject; (ii) no intervening marker of co-ordination or subordination; (iii) only one negation with scope over all verbs; (iv) TMA-markers on either one verb or all verbs; (v) no pause; and (vi) optional argument sharing (Veenstra 1996; Muysken/Veenstra 1995). Several types of SVCs have been distinguished in the literature. Here, however, I will discuss only two types, which are found in both creole and sign languages (see Adone 2008a): (i) directional SVCs involving verbs such as 'run', 'go', and 'get', as illustrated by the Seychelles Creole example in (5a); (ii) benefactive SVCs involving the verb 'give', as in the Saramaccan example in (5b) (Byrne 1990; in Aikhenvald 2006, 26):
(5) a. Zan pe  tay Praslin al sers son   marmay komela    [Seychelles Creole]
       Zan asp run Praslin go get  3poss child  now
       'Zan is getting his child from Praslin now.'

    b. Kófi bi  bái dí  búku dá   dí  muyé                [Saramaccan]
       Kofi tns buy the book give the woman
       'Kofi had bought the woman the book.'

(6) a. person limp-cllegs move-in-circle                  [ASL]
       'A person limping in a circle.'

                      /betalen/
    b. please index1 pay index1 1give2 index2 pu          [NGT]
       'I want to pay you (for it).'
Supalla (1990) discusses ASL constructions involving serial verbs of motion. In (6a), the first verb expresses manner, the second one path (adapted from Supalla 1990, 134). The NGT example in (6b), from Bos (1996), is similar to (5b). It is interesting to note that the mouthing betalen ('to pay') stretches over both verbs as well as an intervening index, which is evidence that the two verbs really form a unit (pu stands for 'palm-up'). Senghas (1995) reports on the existence of such constructions in her study on ISN. Senghas, Kegl, and Senghas (1997) examine the development of word order in ISN and show that the first-generation signers have a rigid word order, with the two verbs and the two arguments consistently interleaved in an N1V1N2V2 pattern (e.g. man push woman fall). In contrast, the second generation of signers initiates patterns such as N1N2V1V2 (man woman push fall) or N1V1V2N2 (man push fall woman). These patterns illustrate that signers of the first generation have SVSV (not an SVC), while second-generation signers prefer both SOVV and SVVO patterns. These latter structures display the defining core properties of SVCs, syntactically and prosodically.
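The pattern notation used here can be made concrete with a small sketch. The following is my own toy illustration, not the coding procedure of Senghas, Kegl, and Senghas (1997): utterances are represented as analyst-assigned (gloss, category) pairs, and a category skeleton is extracted that distinguishes the interleaved first-generation pattern from the verb-clustering second-generation patterns.

    # Toy classifier for ISN-style word-order patterns; the categories
    # 'N' and 'V' are assumed to be assigned by the analyst.

    def order_pattern(utterance):
        """Return the category skeleton of an utterance, e.g. 'NVNV'."""
        return "".join(category for gloss, category in utterance)

    gen1  = [("man", "N"), ("push", "V"), ("woman", "N"), ("fall", "V")]  # N1V1N2V2
    gen2a = [("man", "N"), ("woman", "N"), ("push", "V"), ("fall", "V")]  # N1N2V1V2
    gen2b = [("man", "N"), ("push", "V"), ("fall", "V"), ("woman", "N")]  # N1V1V2N2

    for utterance in (gen1, gen2a, gen2b):
        print(order_pattern(utterance))  # prints NVNV, NNVV, NVVN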
4.5. Morphological structure – an apparent difference

Aronoff, Meir, and Sandler (2005) have argued that many sign languages, despite the fact that they are young languages, paradoxically show complex morphology. Interestingly, most of the attested morphological processes are simultaneous, i.e. stem-internal, in nature (e.g. verb agreement and classifiers), while there is little concatenative (sequential) morphology (which usually involves the grammaticalisation of free signs, as e.g. in the case of the ASL zero-suffix). In an attempt to explain this paradox, Aronoff, Meir, and Sandler (2005) argue that the complex morphology found in sign languages is iconically motivated. On the one hand, reduplication is clearly iconic (see section 4.3.). On the other hand, sign languages, being visual languages, are uniquely suited for reflecting spatial cognitive categories and relations in an iconic way. Given this, sign languages lend themselves to iconicity to a much higher degree than spoken languages do. As a result, it is not surprising that even young sign languages may develop surprisingly complex morphology.

As an example, consider spatial inflection on verbs, that is, the classical distinction between plain, spatial, and agreement verbs, which is pervasive in sign languages around the world (see chapter 7 for details). Padden et al. (2010) examined the agreement and spatial types of verbs in two 'young' sign languages (ABSL and Israeli SL) and found compelling evidence for the development from no agreement to a full agreement system. MSL, also a young sign language, confirms this pattern: plain and spatial verbs are common while agreement verbs are less common (Gébert/Adone 2006). The reason given for the scarcity of agreement verbs is that these verbs often entail grammatical marking of person, number, and syntactic roles. If we assume that spatial verbs involve spatial mapping but no morphosyntactic categories, then we expect them to develop earlier than agreement verbs, that is, we expect the grammatical use of space to develop gradually. YSL seems to confirm this hypothesis. Although YSL is a mature sign language, it still has not developed much morphology. A careful examination of verbs in this sign language shows that both plain and spatial verbs are abundant; the verbs see, look, come, and go, for instance, may be spatially modified to match the location of locative arguments. In contrast, only two instances of agreement verbs, namely give and tell, have been observed so far (Adone 2008b).
4.6. Summary

To sum up, we have seen that there are some striking structural similarities between creole and sign languages, and that these are far from superficial. Due to space limitations, only a few aspects have been singled out for comparison. In addition, both creole and sign languages seem to share similar patterns of possessive constructions (simple juxtaposition of possessor and possessee), rare use or lack of passive constructions, and paucity of prepositions, among others. Obviously, none of these structures is specific to these two types of languages, as they are attested in other languages, too. What makes these structures highly interesting is that they are available in these two 'young' language groups. Having established the structural similarities between these language groups, we may now turn to their genesis. A closer look at their acquisition conditions makes the comparison even more compelling.
5. Similarities in acquisition conditions

5.1. The pidgin-creole language context

Bickerton drew some very interesting parallels between creole languages and first language acquisition data to support his view of a Language Bioprogram which children can fall back on when they do not have access to sufficient input. This view has been much debated and is still a source of dispute within the field of creole studies. Investigating data from Hawaiian Pidgin, Bickerton (1981, 1984, 1995) assumed that adult pidgin speakers learned the target language of the community, which was English, and passed on fragments of that language to their children. Several studies on various pidgins have reported a restricted vocabulary, absence of morphology, and high variability, among other features. Based on the structural similarities between creoles and the initial stages in first language acquisition, Bickerton proposed that a similar system, protolanguage, must have been the evolutionary precursor to language. In broad terms, protolanguage is regarded as "the beginnings of an open system of symbolic communication that provided the bridge to the use of fully expressive languages, rich in both lexicon and grammar" (Arbib/Bickerton 2010, vii). As such, a pidgin can be understood as the precursor of a creole. Due to space limitations, I cannot discuss this matter any further; however, I would like to add that the evidence for such a comparison is compelling (Bickerton 1995; Arbib/Bickerton 2010). I have discussed elsewhere that the acquisition of Mauritian Creole syntax confirms Bickerton's nativist view to some extent. Additional support comes from Ngukurr Kriol (Adone 1997).

It is important to note that there are two situations to be distinguished: the first generation of creole speakers, and the subsequent generations of creole children. The major difference between the first and subsequent generations lies in their access to input. The situation is less extreme for subsequent generations of creole-acquiring children than for the first generation because the former do indeed have access to input, which, however, can be highly variable and unsystematic in nature (Adone 1994, 2001b). An example is the use of lexical and null subjects. The overall development of null subjects in Mauritian Creole showed a U-shaped pattern, which can be partly explained by the highly unsystematic nature of the input (Adone 1994), in particular, the unsystematic use of null/lexical subjects by adults. This in turn can be partly explained by the fact that Mauritian Creole is mostly an oral language. It is safe to say that the language is in a state of flux. There is no conventional language model to reinforce the knowledge of the adult native speaker. Children spend the first years of their lives acquiring a creole spoken in their environment. When they go to school, they become literate in English or French, languages which are not always spoken in their environment (Florigny 2010). Given that they do not get a conventional language model as input, they develop an 'approximate' open system.
5.2. The homesign – sign language context

There are various circumstances in which deaf children acquire language. First, there are those deaf children who grow up with deaf parents and therefore have access to their parents' sign language from birth. This type of first language acquisition is known to proceed just like normal language acquisition (see chapter 28 for details). In a second group, we find deaf children who are surrounded by hearing people with no or very little knowledge of a sign language. The statistics indicate that in roughly 90 % of cases, deaf children are born to hearing parents. For the United States, for instance, it has been reported that only about 8.5 % of deaf children grow up in an environment in which at least one of the parents or siblings uses a sign language. As such, in most homes, the adult signers are non-native users of a sign language. In these cases, deaf children are in a position similar to that of the first generation of creole-speaking children whose parents are pidgin speakers. While we still do not know the exact nature of the input to the first generation of creole-speaking children, we have a better picture in the case of the deaf children. Hearing parents often use gestures to communicate with their deaf children. The gestural system generated by adults is structurally similar to pidgins in that it is irregular, arbitrary, and unsystematic in nature (Goldin-Meadow 2003; Senghas et al. 1997).
This spontaneous and unsystematic repertoire of gestures does not provide the deaf children with sufficient input to acquire their L1, but may allow them to develop a homesign system. Homesign is generally regarded as an amorphous conglomeration of gestures or signs invented by deaf children in a predominantly hearing environment without access to sign language input. Homesign can be regarded as a possible precursor of a sign language in the same way as a pidgin is a precursor of a creole. In both the pidgin and the homesign contexts, children have no access to a conventional language model (see chapter 26, Homesign, for further discussion). It is crucial to note that creole and sign languages are predominantly languages without standardised written forms. More importantly, deaf children and creole-speaking children do not become literate in their first language, for various reasons. Many deaf children are sent to mainstream schools and are thus forced to integrate into the hearing community and learn a spoken language at the expense of their sign language. Studies on child homesigners around the world show that they develop a successful communicative system (Goldin-Meadow 2003; Adone 2005). Taken together, these studies strongly suggest that children play an important role in acquisition.
5.3. The role of children in creolisation

Now, if we take creolisation to be nativisation, we need to take a closer look at the role played by children during this process. Before we move on to the discussion, let us establish some conceptually necessary assumptions. There is little doubt that the ability to acquire and use language is a species-specific property of humans. Although animals can also demonstrate coordination of their behaviour, their communication skills are limited (cf. Hultsch/Mundry/Todt 1999). It is also uncontroversial that language, as a human activity, is highly rule-governed. Within the field of generative linguistics, it is generally assumed that humans have a 'language faculty' that is partially genetically determined. Evidence for this genetic predisposition for language comes from studies supporting the view of 'a critical period', or, most recently, 'sensitive periods' for language acquisition. Evidence for this sort of constraint on human language development comes from children who have undergone hemispherectomy and from feral or seriously deprived children like Genie, who did not have exposure to language until the age of 13 (Curtiss 1977). Taken together, these findings indicate clearly that complete and successful acquisition depends very much on early exposure to a language model.

A closer look at cross-linguistic studies emphasises the ability of children to overgeneralise and regularise. Several cases of overgeneralisation in various languages have been documented by Pinker (1999). He argues that although children are capable of generalising and overgeneralising, such overgeneralisations are actually rare in the acquisition data from English and German (among other languages). We might thus ask ourselves why these children apparently rarely overgeneralise. Some scholars might argue that the parents' corrections could be a reason. However, if this were the case, then we would not expect any overgeneralisations at all. Alternatively, a more interesting explanation might be that these languages are all established languages, that is, children have access to a conventional language model that they have to follow. As a result, there is no 'space' for overgeneralisations. Cases of overgeneralisation in the
child's grammar illustrate the creativity of children and thus confirm the children's ability to generate rules and to apply them elsewhere. This explains why English-speaking children regularise the past tense -ed and produce forms such as goed for a period of time and only later acquire the irregular target form went. By the time children acquire went, they have encountered went in the input many times. Indeed, Pinker (1999) argues that the frequency of the form contributes to reinforcing the memory for the correct form went (also cf. breaked, eated, and the 'double inflected' forms broked, ated).

Looking at the case of children acquiring creole languages offers us a unique perspective on the system these children generate when confronted with a variable, non-conventional input. Adone (2001b, forthcoming) claims that creole-acquiring children seem to have more 'freedom' to generalise and that they do so extensively. She presents evidence for children's creative abilities deployed while forming innovative complex constructions such as passive, double-object, and serial verb constructions. Obviously, these children regularise the input and reorganise inconsistent input, thus turning it into a consistent system (see Adone (2001a) for further empirical evidence involving the generalisation of verbal endings to novel verbs by Seychelles Creole-speaking children).

In an artificial language study, Hudson Kam and Newport (2005) investigated what adult and child learners acquire when their input is inconsistent and variable. Their results clearly indicate that adults and children learn in different ways. While adults did not regularise the input, children did. Adult learners reproduced the variability detected in the input, whereas children did not learn the variability but instead systematised the input. In this context, studies examining the sign language acquisition of Simon, a deaf child of deaf parents, are also interesting (Singleton 1989; Singleton/Newport 2004; Ross/Newport 1996; Newport 1999). Crucially, Simon's deaf parents were late L2 learners of ASL. Their use of morphology and complex syntax was inconsistent when compared to the structures produced by native signers. However, the difference between Simon's morphology and syntax and that of his parents was striking. Overall, Simon clearly surpassed the language model of his parents by regularising the morphemes he was exposed to and using them in over 90 % of the contexts required. With this study, we have strong evidence for children's capacity to go beyond the input they have access to.

There is a substantial body of psycholinguistic studies that stress the ability of children to 'create' language in the absence of a conventional model. But it is also a well-established observation that humans in general are capable of dealing with a so-called 'unstructured environment' in a highly systematic way. Interestingly, a wide array of empirical studies reveals the human ability to learn and to modify knowledge even in the absence of 'environmental systematicity', that is, to create a system out of inconsistency (Frensch/Lindenberger/Ulman 1999). Taken together, empirical studies on the acquisition of creole and sign languages shed light on the human ability to acquire language in the absence of systematic, structured input for two reasons. First, children acquiring a creole today illustrate the case of children who are confronted with a highly variable input.
Second, the study of homesigners illustrates the case of children without language input. In both cases, the children behave similarly to their peers who receive a conventional language model, that is, they generalise, regularise, and reorganise what they get as input.
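The adult/child contrast reported by Hudson Kam and Newport (2005) can be made concrete with a small simulation. The following sketch is a deliberately simplified toy model, not a reimplementation of their artificial language experiment; the 60/40 distribution of the inconsistent marker and the two learner rules are illustrative assumptions only.

```python
import random

random.seed(1)

def make_input(n=1000, p_marker=0.6):
    # Inconsistent input: an optional grammatical marker occurs with probability p_marker.
    return [random.random() < p_marker for _ in range(n)]

def adult_learner(data):
    # Probability matching: reproduce the marker at the rate observed in the input.
    rate = sum(data) / len(data)
    return lambda: random.random() < rate

def child_learner(data):
    # Regularisation: adopt the majority variant and use it consistently.
    majority = sum(data) / len(data) >= 0.5
    return lambda: majority

data = make_input()
adult, child = adult_learner(data), child_learner(data)
adult_rate = sum(adult() for _ in range(1000)) / 1000
child_rate = sum(child() for _ in range(1000)) / 1000
print(f"input rate ~0.60 | adult output: {adult_rate:.2f} | child output: {child_rate:.2f}")
# Typical run: the adult's output mirrors the variable input (around 0.6),
# while the child's output is categorical (1.0): the input has been regularised.
```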
More recently, several studies have focussed on the statistical learning abilities that children display (Saffran/Aslin/Newport 1996; Tenenbaum/Xu 2005). Several observations indicate that we cannot exclude the possibility that children analyse the input for regularities and make use of strong inference capacities during the acquisition process. Various experiments conducted with a child population seem to support the hypothesis that children are equipped with the ability to compute statistics over their input (a toy illustration of one such computation is sketched below). If children can overgeneralise, regularise, and create new structures in language, it is also plausible that they can learn statistically. However, future work is crucial to determine the extent to which this statistical ability is compatible with a language acquisition scenario firmly grounded in a UG framework. To sum up, we have seen that children are equipped with language-creating skills such as regularising and systematising a (possibly impoverished) linguistic system. Now that we have established the role of children in creolisation, we turn to the role adults play in the creolisation process.
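To give a rough idea of what ‘computing statistics’ over the input might involve, the sketch below implements the transitional-probability measure underlying the segmentation findings of Saffran, Aslin, and Newport (1996): syllable transitions within a word are more predictable than transitions across word boundaries. The three-word mini-language is invented for the illustration and is not taken from their stimuli.

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical mini-language of three words, presented as a continuous
# syllable stream with no pauses between words.
words = [["tu", "pi", "ro"], ["go", "la", "bu"], ["bi", "da", "ku"]]
stream = [syl for _ in range(300) for syl in random.choice(words)]

# Transitional probability P(next | current) for adjacent syllable pairs.
pair_counts = Counter(zip(stream, stream[1:]))
syl_counts = Counter(stream[:-1])
tp = {pair: n / syl_counts[pair[0]] for pair, n in pair_counts.items()}

for pair in [("tu", "pi"), ("ro", "go"), ("ro", "bi")]:
    print(pair, round(tp.get(pair, 0.0), 2))
# Within-word transitions (tu -> pi) have TP = 1.0; transitions that cross a
# word boundary (ro -> go, ro -> bi) hover around 0.33. Dips in transitional
# probability are thus reliable cues to word boundaries.
```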
5.4. The role of adults in creolisation

Several studies concerned with the genesis of creole languages assume that adults are the major agents of creolisation (Lumsden 1999; Mufwene 1999). More recently, interdisciplinary studies have helped to clarify the role of adults in language acquisition. Findings on L2 acquisition (Birdsong 2005; Gullberg/Indefrey 2006) reinforce the view that the ability for language learning decreases with age. Based on results from experimental second language acquisition data, it can be argued that in the initial stages, pidgin speakers must have relied on both UG and transfer. This constitutes evidence for the main role played by adults in pidginisation. In the creolisation process, however, they played a less important role, which consisted mainly of providing input to children. Findings in various sub-disciplines of cognitive science show clearly that adults approach the language acquisition task differently from children. First of all, adult learners, in contrast to children, have more elaborate cognitive capacities, but no longer have a child-like memory and perceptual filter (Newport 1988, 1990; Goldowsky/Newport 1993). In particular, differences between adults and children in terms of memory have consequences for language abilities (Caplan/Waters 1999; Salthouse 1991; Gullberg/Indefrey 2006). In addition, Hudson Kam and Newport (2005) demonstrated very clearly that the learning mechanisms of adults and children are different: adults, in contrast to children, do not regularise unpredictable variation in the input. Interestingly, differences between adults and children are observed not only in language acquisition but also in gesture generation. In a series of experiments, Goldin-Meadow, McNeill, and Singleton (1996) demonstrated that when hearing adults generated gestures, their goal was to produce a handshape that represented the object appropriately. These adults instantly invented a gesture system with segmentation and combination, but compared to children, their gestures were not systematically organised into a system of internal contrasts, that is, they did not have morphological structure; nor did they have combinatorial form. The unreliable and unsystematic nature of these gestures can be explained by the fact that such gestures accompany speech and do
not carry the full burden of communication, as gestures do for people who use them as their primary means of communication. From the various studies gathered so far, we thus have good reason to maintain the assumption that adults can learn from other adults and transmit variable input to children. The role of regularising the input, however, can be ascribed to children only.
6. Creolisation and recreolisation revisited

Following Chomsky’s (1986) discussion of I(nternalised)-language versus E(xternalised)-language, I propose ‘I-creolisation’ to refer to the process of nativisation that takes place in the absence of a conventional language model, in contrast to ‘E-creolisation’, which refers to the process at the societal level. The converging evidence on the genesis of languages from the two groups is consistent with the view that creolisation is also at work in the development of sign languages. In this case, we refer to I-creolisation, given that this process takes place in the brain of the speakers concerned. As already mentioned, there are no reasons to focus on E-creolisation in the case of sign languages. The latter type of creolisation refers to the sociolinguistic process that is intrinsic to the formation of creole languages. I have argued that the homesign context is comparable to the creole context in two ways, namely in terms of acquisition conditions and in terms of the role played by children in acquisition. Creolisation here is understood in a broader sense as a process of nativisation that takes place when input in language acquisition is incomplete. Data on both creole languages and sign languages illustrate the process of first language acquisition with either incomplete input (i.e. creole languages) or without input (i.e. homesigners). Under these circumstances, children either reorganise the variable input or invent a new system. This scenario differs from the acquisition of established languages in that variable input leads children to be creative. Several factors such as memory, age, and limited experience (e.g. in the form of exposure to literacy in the L1) play an important role in the linguistic behaviour of adults (cf. Kuhl (2000) and Petersson et al. (2000) on the effects of literacy, especially the development of a formatted system, on phonological processing in literate and illiterate subjects). Parallel to creolisation, the process of recreolisation is worth mentioning. In the field of creole studies, the term was used in the seventies to refer to a phenomenon observed in the second generation of Jamaican speakers in Great Britain. According to Sebba (1997), these speakers – in contrast to the first generation of creole-speaking immigrants – apparently altered their creole to make it sound more like the one spoken in Jamaica. In this sense, the second generation of creole speakers recreolise their linguistic system through phonological and syntactic alterations. This view of recreolisation fits well with the concept of E-creolisation because of its societal implications. Using a computer model that analysed language change and its trajectory, Niyogi and Berwick (1995) convincingly showed that a small portion of a population of child learners fails to converge on pre-existing grammars. After exposure to a finite amount of data, some children converge on the pre-existing grammar, while others do not and consequently attain a different grammar. As a result, the second generation then becomes linguistically heterogeneous. The third generation of children
hears sentences produced by the second, and they, in turn, will attain a different set of grammars. Consequently, over successive generations, the linguistic composition of the population becomes a dynamic system (a toy simulation of this generational dynamic is sketched at the end of this section). A systematic look at Mauritian Creole in the seventies, the nineties, and today shows a pattern of change that looks very much like the one predicted by Niyogi and Berwick (1995), namely a very dynamic but highly variable creole system (cf. also Syea 1993). Based on the evidence provided by Niyogi (2006), I propose that the changes seen in both creoles and sign languages can be adequately explained by I-recreolisation. However, this takes place in every generation of speakers if and only if each generation of children does not have a conventional language model. The fact that generations of speakers do not become literate in their L1 (be it a creole or a sign language) contributes to the non-availability of conventional patterns in these two language groups. The acquisition of creole languages today is comparable to the acquisition of ASL by Simon (see section 5.3) because both the child population and Simon are exposed to a highly variable, non-conventional language model. Taken together, these studies highlight the role of children in acquisition in the absence of a conventional language model. It is exactly this condition that leads children to surpass their language models. In comparison, the first generation of creole speakers and child homesigners invent language because of the non-availability of a language model.
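The dynamic described by Niyogi and Berwick can be illustrated with a toy iterated-learning simulation. The sketch below is a drastically simplified stand-in for their actual computational model: two competing grammars, learners who each receive a finite sample of utterances and converge on whichever grammar forms the majority of that sample, and a population whose composition is re-estimated every generation. The sample size and starting proportion are arbitrary choices made for the illustration.

```python
import random

random.seed(42)

def learn(p_parent, sample_size=5):
    # A learner hears sample_size utterances drawn from the parent generation
    # and converges on the grammar that forms the majority of this finite
    # sample; with small samples, some learners attain the minority grammar.
    sample = [random.random() < p_parent for _ in range(sample_size)]
    return sum(sample) > sample_size / 2

def next_generation(p_parent, n_learners=1000):
    return sum(learn(p_parent) for _ in range(n_learners)) / n_learners

p = 0.6  # initial proportion of the population using grammar A
for gen in range(6):
    print(f"generation {gen}: {p:.2f} of the population uses grammar A")
    p = next_generation(p)
# Because the data each learner sees is finite, every generation contains
# learners whose grammar differs from the majority one; the population's
# linguistic composition therefore changes from one generation to the next
# rather than remaining fixed.
```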
7. Conclusion

At the beginning of this chapter, I attempted to provide a definition of creolisation and related issues. Parallels between creole and sign languages have been established in terms of acquisition and input. My main goal has been to show that sign languages do share certain structural similarities with creole languages, some of which had already been observed in early studies on sign languages. Recent studies have shown that some of the similarities cannot be explained by the ‘age’ of these languages, but should rather be related to the acquisition conditions in the genesis of these languages. Based on a close examination and comparison of sign languages and creole languages, I have argued that there is solid evidence for the view that creolisation is also at work in the formation of sign languages, in spite of modality-specific characteristics of languages that may influence the process. The acquisition studies on creole and sign languages do shed light on the role of children in shaping language. Both cases of acquisition can be taken as cogent empirical evidence for the human ability, especially children’s ability, to invent language (homesigners and most probably the first generation of creole speakers) or to surpass their language model (Simon and creole-acquiring children today). Several studies have shown that under particular circumstances, i.e. variable, unconventional input, children regularise and systematise their input. These findings converge with the view that children play a crucial role in language formation (Bickerton 1984, and subsequent work; Traugott 1977). Creolisation is then taken to refer to nativisation across modality. While it is plausible to argue that creolisation can also take place in the formation of sign languages, the recreolisation issue remains unclear at this stage. Interestingly,
continuing studies on sign languages might give us deeper insights into the process of recreolisation itself. At this stage, there are still many open questions. An agenda for future research should definitely address the following issues. In the light of what has been discussed in creole studies, the question arises whether recreolisation can also take place in sign languages. If yes, then we need to clarify whether it takes place in every single generation of speakers/signers. Other related issues are the links between creolisation, grammaticalisation, and language change. Furthermore, if there is creolisation and possibly recreolisation, can we expect decreolisation in the life cycle of a language? Both the theoretical analysis and the empirical findings substantiating the analysis in this chapter should be regarded as a first step towards disentangling the complexities of creolisation across modality.

Acknowledgements: I would like to thank Adam Schembri, Trude Schermer, Marlyse Baptista, and Susanne Fischer for remarks on previous versions of this chapter. I am very grateful to Roland Pfau for his insightful comments. Thank you also to Markus Steinbach and Timo Klein. All disclaimers apply.
8. Literature

Adone, Dany 1994 The Acquisition of Mauritian Creole. Amsterdam: Benjamins.
Adone, Dany 1997 The Acquisition of Ngukurr Kriol as a First Language. A.I.A.T.S.I.S. Project Report. Darwin/Canberra, Australia.
Adone, Dany 2001a Morphology in Two Indian Ocean Creoles. Paper Presented at the Meeting of the Society of Pidgin and Creole Languages. University of Coimbra, Portugal.
Adone, Dany 2001b A Cognitive Theory of Creole Genesis. Habilitation Thesis. Heinrich-Heine-Universität, Düsseldorf.
Adone, Dany 2003 Reduplication in Creole and Sign Languages. Paper Presented at the Meeting of the Society of Pidgin and Creole Languages. University of Hawai‘i at Mānoa.
Adone, Dany 2005 The Case of Mauritian Home Sign. In: Brugos, Alejna/Clark-Cotton, Manuella/Ha, Seungwan (eds.), Proceedings of the 29th Annual Boston University Conference on Language Development. Somerville, MA: Cascadilla Press, 12–23.
Adone, Dany 2007 From Gestures to Mauritian Sign Language. Paper Presented at the Current Issues in Sign Language Research Conference. University of Cologne, Germany.
Adone, Dany 2008a From Gesture to Sign. The Leap to Language. Manuscript, University of Cologne.
Adone, Dany 2008b Looking at Yolngu Sign Language a Decade Later. A Mini-study on Language Change. Manuscript, University of Cologne.
Adone, Dany 2009 Grammaticalisation and Creolisation: The Case of Ngukurr Kriol. Paper Presented at the Meeting of the Society of Pidgin and Creole Languages. San Francisco.
Adone, Dany forthcoming The Acquisition of Creole Languages. Cambridge: Cambridge University Press.
Adone, Dany/Maypilama, E. Lawurrpa 2012 Yolngu Sign Language from a Sociolinguistic Perspective. Manuscript, Charles Darwin University.
Adone, Dany/Vainikka, Anne 1999 Acquisition of Wh-questions in Mauritian Creole. In: DeGraff, Michel (ed.), Language Creation and Language Change: Creolization, Diachrony, and Development. Cambridge, MA: MIT Press, 75–94.
Aikhenvald, Alexandra Y. 2006 Serial Verb Constructions in Typological Perspective. In: Aikhenvald, Alexandra/Dixon, Robert M.W. (eds.), Serial Verb Constructions. A Cross-linguistic Typology. Oxford: Oxford University Press, 1–68.
Andersen, Roger (ed.) 1983 Pidginization and Creolization as Language Acquisition. Rowley, MA: Newbury House.
Ansaldo, Umberto/Matthews, Stephen/Lim, Lisa (eds.) 2007 Deconstructing Creole. Amsterdam: Benjamins.
Arbib, Michael A./Bickerton, Derek 2010 Preface. In: Arbib, Michael A./Bickerton, Derek (eds.), The Emergence of Protolanguage. Holophrasis vs. Compositionality. Amsterdam: Benjamins, vii–xi.
Arends, Jacques 1993 Towards a Gradualist Model of Creolization. In: Byrne, Francis/Holm, John (eds.), Atlantic Meets Pacific. Amsterdam: Benjamins, 371–380.
Arends, Jacques 1994 The African-born Slave Child and Creolization (a Post-script to the Bickerton/Singler Debate on Nativization). In: Journal of Pidgin and Creole Languages 9, 115–119.
Arends, Jacques/Muysken, Pieter/Smith, Norval (eds.) 1995 Pidgins and Creoles: An Introduction. Amsterdam: Benjamins.
Aronoff, Mark/Meir, Irit/Sandler, Wendy 2005 The Paradox of Sign Language Morphology. In: Language 81(2), 301–344.
Aronoff, Mark/Meir, Irit/Padden, Carol/Sandler, Wendy 2010 The Roots of Linguistic Organization in a New Language. In: Arbib, Michael A./Bickerton, Derek (eds.), The Emergence of Protolanguage. Holophrasis vs. Compositionality. Amsterdam: Benjamins, 133–152.
Baker, Philip/Fon Sing, Guillaume (eds.) 2007 The Making of Mauritian Creole. London: Battlebridge.
Bakker, Peter/Parkvall, Mikael 2005 Reduplication in Pidgins and Creoles. In: Hurch, Bernhard (ed.), Studies on Reduplication. Berlin: Mouton de Gruyter, 511–553.
Baptista, Marlyse 2002 The Syntax of Cape Verdean Creole: The Sotavento Varieties. Amsterdam: Benjamins.
Bickerton, Derek 1974 Creolisation, Linguistic Universals, Natural Semantax and the Brain. In: University of Hawaii Working Papers in Linguistics 6, 125–141.
Bickerton, Derek 1977 Pidginization and Creolization: Language Acquisition and Language Universals. In: Valdman, Albert (ed.), Pidgin and Creole Linguistics. Bloomington, IN: Indiana University Press, 46–69.
Bickerton, Derek 1981 Roots of Language. Ann Arbor, MI: Karoma Publishers.
Bickerton, Derek 1984 The Language Bioprogram Hypothesis. In: The Behavioral and Brain Sciences 7, 173–221.
Bickerton, Derek 1990 Language and Species. Chicago: University of Chicago Press.
Bickerton, Derek 1995 Language and Human Behavior. London: UCL Press.
Birdsong, David 2005 Interpreting Age Effects in Second Language Acquisition. In: Kroll, Judith F./De Groot, Annette M.B. (eds.), Handbook of Bilingualism. Psycholinguistic Approaches. Oxford: Oxford University Press, 109–127.
Bollée, Annegret 1977 Le Créole Français des Seychelles. Tübingen: Niemeyer.
Bollée, Annegret 1982 Die Rolle der Konvergenz bei der Kreolisierung. In: Ureland, Per Sture (ed.), Die Leistung der Strataforschung und der Kreolistik: Typologische Aspekte der Sprachforschung. Tübingen: Niemeyer, 391–405.
Bollée, Annegret 2007 Im Gespräch mit Annegret Bollée. In: Reutner, Ursula (ed.), Annegret Bollée: Beiträge zur Kreolistik. Hamburg: Helmut Buske, 189–215.
Bos, Heleen F. 1996 Serial Verb Constructions in Sign Language of the Netherlands. Manuscript, University of Amsterdam.
Bruyn, Adrienne/Muysken, Pieter/Verrips, Maaike 1999 Double-object Constructions in the Creole Languages: Development and Acquisition. In: DeGraff, Michel (ed.), Language Creation and Language Change: Creolization, Diachrony, and Development. Cambridge, MA: MIT Press, 329–373.
Caplan, David/Waters, Gloria S. 1999 Verbal Working Memory and Sentence Comprehension. In: The Behavioral and Brain Sciences 22(1), 77–94.
Chaudenson, Robert 1992 Des Îles, des Hommes, des Langues: Essai sur la Créolisation Linguistique et Culturelle. Paris: L’Harmattan.
Chomsky, Noam 1986 Knowledge of Language: Its Nature, Origin and Use. New York: Praeger.
Cooke, Michael/Adone, Dany 1994 Yolngu Signing – Gestures or Language? In: CALL Working Papers, Centre for Aboriginal Languages and Linguistics, Batchelor College, Northern Territory, Australia, 1–15.
Coppola, Marie/So, Wing Chee 2005 Abstract and Object-anchored Deixis: Pointing and Spatial Layout in Adult Homesign Systems in Nicaragua. In: Brugos, Alejna/Clark-Cotton, Manuella/Ha, Seungwan (eds.), Proceedings of the 29th Annual Boston University Conference on Language Development. Somerville, MA: Cascadilla Press, 144–155.
Corne, Chris 1999 From French to Creole. The Development of New Vernaculars in the French Colonial World. London: Battlebridge.
Curtiss, Susan 1977 Genie: A Psycholinguistic Study of a Modern-day “Wild Child”. New York: Academic Press.
DeGraff, Michel 1999 Creolization, Language Change, and Language Acquisition: A Prolegomenon. In: DeGraff, Michel (ed.), Language Creation and Language Change: Creolization, Diachrony, and Development. Cambridge, MA: MIT Press, 1–46.
DeGraff, Michel 2003 Against Creole Exceptionalism. In: Language 79(2), 391–410.
Deuchar, Margaret 1987 Sign Languages as Creoles and Chomsky’s Notion of Universal Grammar. In: Modgil, Sohan/Modgil, Celia (eds.), Noam Chomsky: Consensus and Controversy. New York: Falmer, 81–91.
Fischer, Susan D. 1973 Two Processes of Reduplication in the American Sign Language. In: Foundations of Language 9, 469–480.
Fischer, Susan D. 1978 Sign Language and Creoles. In: Siple, Patricia (ed.), Understanding Language through Sign Language Research. New York: Academic Press, 309–331.
Fischer, Susan D. 1996 By the Numbers: Language-internal Evidence for Creolization. In: Edmondson, William/Wilbur, Ronnie B. (eds.), International Review of Sign Linguistics. Hillsdale, NJ: Lawrence Erlbaum, 1–22.
Florigny, Guilhem 2010 Acquisition du Kreol Mauricien et du Français. PhD Dissertation, University of Paris Ouest Nanterre La Défense.
Frensch, Peter A./Lindenberger, Ulman/Kray, Jutta 1999 Imposing Structure on an Unstructured Environment: Ontogenetic Changes in the Ability to Form Rules of Behavior under Conditions of Low Environmental Predictability. In: Friederici, Angela D./Menzel, Randolf (eds.), Learning: Rule Extraction and Representation. Berlin: Mouton de Gruyter, 139–158.
Gébert, Alain/Adone, Dany 2006 A Dictionary and Grammar of Mauritian Sign Language, Vol. 1. Vacoas, République de Maurice: Editions Le Printemps.
Gee, James Paul/Goodhart, Wendy 1988 American Sign Language and the Human Biological Capacity for Language. In: Strong, Michael (ed.), Language Learning and Deafness. Cambridge: Cambridge University Press, 49–79.
Goldin-Meadow, Susan 2003 The Resilience of Language. What Gesture Creation in Deaf Children Can Tell Us About How All Children Learn Language. New York: Psychology Press.
Goldin-Meadow, Susan/McNeill, David/Singleton, Jenny 1996 Silence is Liberating: Removing the Handcuffs on Grammatical Expression in the Manual Modality. In: Psychological Review 103, 34–55.
Goldowsky, Boris N./Newport, Elissa L. 1993 Modelling the Effects of Processing Limitations on the Acquisition of Morphology: The Less is More Hypothesis. In: Clark, Eve (ed.), Proceedings of the 24th Annual Child Language Research Forum. Stanford, CA, 124–138.
Gullberg, Marianne/Indefrey, Peter (eds.) 2006 The Cognitive Neuroscience of Second Language Acquisition. Oxford: Blackwell.
Hall, Robert 1966 Pidgin and Creole Languages. Ithaca, NY: Cornell University Press.
Hauser, Marc D./Fitch, W. Tecumseh 2003 What Are the Uniquely Human Components of the Language Faculty? In: Christiansen, Morten H./Kirby, Simon (eds.), Language Evolution. Oxford: Oxford University Press, 158–181.
Hudson Kam, Carla/Newport, Elissa L. 2005 Regularizing Unpredictable Variation: The Roles of Adult and Child Learners in Language Formation and Change. In: Language Learning and Development 1(2), 151–195.
Hultsch, Henrike/Mundry, Roger/Todt, Dietmar 1999 Learning, Representation and Retrieval of Rule-related Knowledge in the Song System of Birds. In: Friederici, Angela D./Menzel, Randolf (eds.), Learning: Rule Extraction and Representation. Berlin: Mouton de Gruyter, 89–115.
Hymes, Dell H. (ed.) 1971 Pidginization and Creolization of Languages. Cambridge: Cambridge University Press.
Ingram, Robert M. 1978 Theme, Rheme, Topic, and Comment in the Syntax of American Sign Language. In: Sign Language Studies 20, 193–218.
Kegl, Judy 2002 Language Emergence in a Language-ready Brain: Acquisition. In: Morgan, Gary/Woll, Bencie (eds.), Directions in Sign Language Acquisition. Amsterdam: Benjamins, 207–254.
Kegl, Judy/Senghas, Ann/Coppola, Marie 1999 Creation through Contact: Sign Language Emergence and Sign Language Change in Nicaragua. In: DeGraff, Michel (ed.), Language Creation and Language Change: Creolization, Diachrony, and Development. Cambridge, MA: MIT Press, 179–237.
Kendon, Adam 1988 Sign Languages of Aboriginal Australia. Cultural, Semiotic and Communicative Perspectives. Cambridge: Cambridge University Press.
Klima, Edward/Bellugi, Ursula 1979 The Signs of Language. Cambridge, MA: Harvard University Press.
Kuhl, Patricia K. 2000 Language, Mind, and Brain: Experience Alters Perception. In: Gazzaniga, Michael S. (ed.), The New Cognitive Neurosciences. Cambridge, MA: MIT Press, 99–115.
Lefebvre, Claire 1986 Relexification in Creole Genesis Revisited: The Case of Haitian Creole. In: Muysken, Pieter/Smith, Norval (eds.), Substrata Versus Universals in Creole Genesis. Amsterdam: Benjamins, 279–300.
Lefebvre, Claire 1991 Take Serial Verb Constructions in Fon. In: Lefebvre, Claire (ed.), Serial Verbs: Grammatical, Comparative and Cognitive Approaches. Amsterdam: Benjamins, 37–78.
Lefebvre, Claire 1998 Creole Genesis and the Acquisition of Grammar: The Case of Haitian Creole. Cambridge: Cambridge University Press.
Lumsden, John 1999 Language Acquisition and Creolization. In: DeGraff, Michel (ed.), Language Creation and Language Change: Creolization, Diachrony, and Development. Cambridge, MA: MIT Press, 129–157.
Maypilama, E. Lawurrpa/Adone, Dany 2012 Bimodal Bilingualism in the Top End. Manuscript, Charles Darwin University.
McWhorter, John 1997 Towards a New Model of Creole Genesis. New York: Peter Lang.
McWhorter, John 2001 The World’s Simplest Grammars Are Creole Grammars. In: Linguistic Typology 5(2/3), 125–166.
Meier, Richard P. 1984 Sign as Creole. In: The Behavioral and Brain Sciences 7, 201–202.
Meir, Irit/Sandler, Wendy/Padden, Carol/Aronoff, Mark 2010a Emerging Sign Languages. In: Marschark, Mark/Spencer, Patricia E. (eds.), Oxford Handbook of Deaf Studies, Language, and Education, Volume 2. Oxford: Oxford University Press, 267–280.
Meir, Irit/Aronoff, Mark/Sandler, Wendy/Padden, Carol 2010b Sign Languages and Compounding. In: Scalise, Sergio/Vogel, Irene (eds.), Cross-disciplinary Issues in Compounding. Amsterdam: Benjamins, 301–322.
Mühlhäusler, Peter 1986 Pidgin and Creole Linguistics. Oxford: Blackwell.
Mufwene, Salikoko 1996 Creolisation and Grammaticalization: What Creolistics Could Contribute to Grammaticalization. In: Baker, Philip/Syea, Anand (eds.), Changing Meanings, Changing Functions. Papers Relating to Grammaticalisation in Contact Languages. London: University of Westminster Press.
Mufwene, Salikoko 1999 On the Language Bioprogram Hypothesis: Hints from Tazie. In: DeGraff, Michel (ed.), Language Creation and Language Change: Creolization, Diachrony, and Development. Cambridge, MA: MIT Press, 95–127.
Mufwene, Salikoko 2000 Creolization Is a Social, Not a Structural, Process. In: Neumann-Holzschuh, Ingrid/Schneider, Edgar (eds.), Degrees of Restructuring in Creole Languages. Amsterdam: Benjamins, 65–84.
Muysken, Pieter 1988 Are Creoles a Special Type of Language? In: Newmeyer, Frederick J. (ed.), Linguistics: The Cambridge Survey. Vol. II: Linguistic Theory: Extensions and Implications. Cambridge: Cambridge University Press, 285–301.
Muysken, Pieter/Veenstra, Tonjes 1995 Serial Verbs. In: Arends, Jacques/Muysken, Pieter/Smith, Norval (eds.), Pidgins and Creoles: An Introduction. Amsterdam: Benjamins, 289–301.
Newport, Elissa L. 1988 Constraints on Learning and Their Role in Language Acquisition. In: Language Sciences 10, 147–172.
Newport, Elissa L. 1990 Maturational Constraints on Language Learning. In: Cognitive Science 14, 11–28.
Newport, Elissa L. 1999 Reduced Input in the Acquisition of Sign Languages: Contributions to the Study of Creolisation. In: DeGraff, Michel (ed.), Language Creation and Language Change: Creolization, Diachrony, and Development. Cambridge, MA: MIT Press, 161–178.
Niyogi, Partha 2006 The Computational Nature of Language Learning and Evolution. Cambridge, MA: MIT Press.
Niyogi, Partha/Berwick, Robert C. 1995 The Logical Problem of Language Change. Cambridge, MA: MIT Memo No. 1516.
Nyst, Victoria 2007 A Descriptive Analysis of Adamorobe Sign Language (Ghana). PhD Dissertation, University of Amsterdam. Utrecht: LOT.
Padden, Carol/Meir, Irit/Aronoff, Mark/Sandler, Wendy 2010 The Grammar of Space in Two New Sign Languages. In: Brentari, Diane (ed.), Sign Languages (Cambridge Language Surveys). Cambridge: Cambridge University Press, 570–592.
Petersson, Karl M./Reis, Alexandra/Askelöf, Simon/Castro-Caldas, Alexandre/Ingvar, Martin 2000 Language Processing Modulated by Literacy: A Network Analysis of Verbal Repetition in Literate and Illiterate Subjects. In: Journal of Cognitive Neuroscience 12(3), 364–382.
Pfau, Roland/Steinbach, Markus 2006 Pluralization in Sign and in Speech: A Cross-modal Typological Study. In: Linguistic Typology 10, 135–182.
Pinker, Steven 1999 Words and Rules: The Ingredients of Language. New York: Harper Collins.
Plag, Ingo 1993 Sentential Complementation in Sranan: On the Formation of an English-based Creole Language. Tübingen: Niemeyer.
Plag, Ingo 1998 On the Role of Grammaticalization in Creolization. In: Gilbert, Glenn (ed.), Pidgin and Creole Languages in the 21st Century. New York: Peter Lang.
Rathmann, Christian 2005 Event Structure in American Sign Language. PhD Dissertation, University of Texas at Austin.
Roberts, Julian 1995 Pidgin Hawaiian: A Sociohistorical Study. In: Journal of Pidgin and Creole Languages 10, 1–56.
Roeper, Tom 2007 The Prism of Grammar. How Child Language Illuminates Humanism. Cambridge, MA: MIT Press.
Ross, Danielle S./Newport, Elissa L. 1996 The Development of Language from Non-native Linguistic Input. In: Stringfellow, Andy/Cahana-Amitay, Dalia/Hughes, Elizabeth/Zukowski, Andrea (eds.), Proceedings of the 20th Annual Boston University Conference on Language Development, 634–645.
Saffran, Jenny R./Aslin, Richard N./Newport, Elissa L. 1996 Statistical Learning by 8-month-old Infants. In: Science 274, 1926–1928.
Salthouse, Timothy A. 1991 Theoretical Perspectives on Cognitive Aging. Hillsdale, NJ: Lawrence Erlbaum.
Sandler, Wendy/Lillo-Martin, Diane 2006 Sign Language and Linguistic Universals. Cambridge: Cambridge University Press.
Sandler, Wendy/Meir, Irit/Padden, Carol/Aronoff, Mark 2005 The Emergence of Grammar: Systematic Structure in a New Language. In: Proceedings of the National Academy of Sciences 102(7), 2661–2665.
Sankoff, Gillian 1979 The Genesis of a Language. In: Hill, Kenneth (ed.), The Genesis of Language. Ann Arbor: Karoma Press, 23–47.
Sebba, Mark 1997 Contact Languages. Pidgins and Creoles. New York: St Martin’s Press.
Senghas, Ann 1995 Children’s Contribution to the Birth of Nicaraguan Sign Language. PhD Dissertation, MIT, Cambridge, MA.
Senghas, Ann 2000 The Development of Early Spatial Morphology in Nicaraguan Sign Language. In: Howell, Catherine/Fish, Sarah/Keith-Lucas, Thea (eds.), Proceedings of the 24th Annual Boston University Conference on Language Development. Somerville, MA: Cascadilla Press, 696–707.
Senghas, Ann/Coppola, Marie/Newport, Elissa L./Supalla, Ted 1997 Argument Structure in Nicaraguan Sign Language: The Emergence of Grammatical Devices. In: Hughes, Elizabeth/Hughes, Mary/Greenhill, Annabel (eds.), Proceedings of the 21st Annual Boston University Conference on Language Development. Somerville, MA: Cascadilla Press, 550–561.
Senghas, Richard J./Kegl, Judy/Senghas, Ann 1997 Creation through Contact: The Development of a Nicaraguan Deaf Community. Paper Presented at the Second International Conference on Deaf History. University of Hamburg.
Siegel, Jeff 1999 Transfer Constraints and Substrate Influence in Melanesian Pidgin. In: Journal of Pidgin and Creole Languages 14, 1–44.
Singler, John 1992 Nativization and Pidgin/Creole Genesis: A Reply to Bickerton. In: Journal of Pidgin and Creole Languages 7, 319–333.
Singler, John 1993 African Influence upon Afro-American Language Varieties: A Consideration of Sociohistorical Factors. In: Mufwene, Salikoko (ed.), Africanisms in Afro-American Language Varieties. Athens, GA: University of Georgia Press, 235–253.
Singler, John 1996 Theories of Creole Genesis, Sociohistorical Considerations, and the Evaluation of Evidence: The Case of Haitian Creole and the Relexification Hypothesis. In: Journal of Pidgin and Creole Languages 11, 185–230.
Singleton, Jenny L. 1989 Restructuring of Language from Impoverished Input: Evidence for Linguistic Compensation. PhD Dissertation, University of Illinois at Urbana-Champaign.
Singleton, Jenny L./Newport, Elissa L. 2004 When Learners Surpass Their Models: The Acquisition of American Sign Language from Inconsistent Input. In: Cognitive Psychology 49, 370–407.
Slobin, Dan I. 1985 Cross-linguistic Evidence for the Language-making Capacity. In: Slobin, Dan I. (ed.), The Cross-Linguistic Study of Language Acquisition. Vol. 2: Theoretical Issues. Hillsdale, NJ: Lawrence Erlbaum, 1157–1256.
Supalla, Ted 1990 Serial Verbs of Motion in ASL. In: Fischer, Susan/Siple, Patricia (eds.), Theoretical Issues in Sign Language Research. Vol. 1: Linguistics. Chicago: University of Chicago Press, 127–152.
Sutton-Spence, Rachel/Woll, Bencie 1999 The Linguistics of British Sign Language. An Introduction. Cambridge: Cambridge University Press.
Syea, Anand 1993 Null Subjects in Mauritian Creole and the Pro-drop Parameter. In: Byrne, Francis/Holm, John (eds.), Atlantic Meets Pacific. Amsterdam: Benjamins, 91–102.
Tenenbaum, Joshua/Xu, Fei 2005 Word Learning as Bayesian Inference: Evidence from Preschoolers. In: Proceedings of the 27th Annual Conference of the Cognitive Science Society. Mahwah, NJ: Lawrence Erlbaum, 2381–2386.
Thomason, Sarah/Kaufman, Terrence 1988 Language Contact, Creolization, and Genetic Linguistics. Berkeley/Los Angeles: University of California Press.
Todd, Loreto 1990 Pidgins and Creoles. London: Routledge.
Traugott, Elizabeth 1977 Pidginization, Creolization, and Language Change. In: Valdman, Albert (ed.), Pidgin and Creole Linguistics. Bloomington, IN: Indiana University Press, 70–98.
Veenstra, Tonjes 1996 Serial Verbs in Saramaccan: Predication and Creole Genesis. PhD Dissertation, University of Amsterdam. The Hague: HAG.
Veenstra, Tonjes 2003 What Verbal Morphology Can Tell Us About Creole Genesis: The Case of French-related Creoles. In: Plag, Ingo/Lappe, Sabine (eds.), The Phonology and Morphology of Creole Languages. Tübingen: Niemeyer, 293–314.
Woodward, James 1978 Historical Bases of ASL. In: Siple, Patricia (ed.), Understanding Language through Sign Language Research. New York: Academic Press, 333–348.
Dany Adone, Cologne (Germany)
37. Language planning

1. Introduction
2. Language planning
3. Status planning: Recognition of sign languages
4. Corpus planning
5. A case study: Standardisation of Sign Language of the Netherlands (NGT)
6. Lexical modernisation
7. Acquisition planning
8. Conclusion
9. Literature
Abstract

In this chapter, three aspects of language planning will be described for sign languages: status planning, corpus planning, and acquisition planning. As for status planning, in most countries the focus of attention is usually on the legal recognition of the national sign language. Corpus planning will be discussed in relation to standardisation and lexical modernisation, followed by a short discussion of acquisition planning. Standardisation of languages in general is a controversial issue, and there are only a few examples of efforts to standardise a sign language. The process of standardising the lexicon of Sign Language of the Netherlands will be discussed as an example of a specific form of standardisation, informed by thorough knowledge of the lexical variation existing in the language.
1. Introduction

In this chapter, selected aspects of sign language politics will be discussed. In describing issues related to the use and status of a language, various terms have been used in the literature: language politics, language policy, and language planning. These terms require some clarification. The term “language planning”, introduced by the American-Norwegian linguist Einar Haugen in his 1968 article about modern Norwegian, describes “an activity of preparing a normative orthography, grammar, and dictionary for
the guidance of writers and speakers in a non-homogeneous speech community” (Haugen 1968, 673). In the late 1960s and early 1970s, scientific interest in language planning applied mainly to a third-world context, where the establishment of one standardised national language was regarded as necessary – from a Western European perspective. Language planning tended to be considered an activity whose main goal is to solve problems and to bring about change. Two decades after Haugen introduced his definition of “language planning”, the sociolinguist Robert L. Cooper proposed an alternative definition which was somewhat less oriented towards problem solving: “language planning refers to deliberate efforts to influence the behaviour of others with respect to the acquisition, structure, or functional allocation of their codes” (Cooper 1989, 45). In the meantime, others had also contributed to the definition of language planning by questioning the advisability of language planning: “It can be done, but should it be done?” (Fishman 1983). The relationship between language politics, language policy, and language planning may be described in the following way: from certain language politics, a certain language policy will follow, which will be implemented through some type of language planning. In other words: language politics refers to the why, language policy to the what, and language planning to the how. A policy is a deliberate plan of action to guide decisions and achieve rational outcome(s). The term may apply to governments, private sector organisations and groups, and individuals. Policy differs from rules or law. While law can compel or prohibit behaviours, policy merely guides actions toward those that are most likely to yield a desired outcome. However, policy may also refer to the process of making important organisational decisions, including the identification of different alternatives and choosing among them on the basis of the impact they will have. Policies can also be understood as political, management, financial, and administrative mechanisms arranged to reach explicit goals. Since policy refers to both a plan of action and the process of making a decision, the term may be a little confusing. Therefore, in this chapter, the term ‘language planning’ will be used, referring to those political and other opinions and measures that focus on the regulation or improvement of the use and/or status of a language. In this chapter, the relevant aspects of language planning, as mentioned above, will be discussed with respect to sign languages. It is important to stress that for most languages, but certainly for most sign languages, language planning is not formally and rationally conducted by some central authority. As Cooper (1989, 41) states: “In reality, language planning rarely conforms to this ideal and more often than not language planning is a messy affair, ad hoc, haphazard, and emotionally driven”. Moreover, although language planning activities may be conducted by a wide range of institutions – apart from language academies, governments, and ministries of education – pressure groups and individuals play a crucial role in the process of sign language planning activities in various countries. In section 2, we will address some general aspects of language planning. The discussion of status planning in section 3 comprises perspectives on deafness (section 3.1) and the legal recognition of sign languages (section 3.2).
Sections 4 to 6 focus on different aspects of corpus planning, namely standardisation (section 4.1), codification of the language (section 4.2), a case study of standardisation (section 5), and lexical modernisation (section 6). Acquisition planning will be discussed in section 7.
2. Language planning

Language planning can be divided into three subtypes: status planning, corpus planning, and acquisition or educational planning. Status planning refers to all efforts undertaken to change the use and function of a language (or language variety). Deumert (2001) states that examples of status planning are matters such as:

– recognition (or not) of a language as an official language;
– multilingualism in situations where more than one language is the national language (for example, Flemish and French in Belgium).

Corpus planning is concerned with the internal structure of a language, such as prescriptive intervention in the forms of a language. According to Deumert (2001), corpus planning is often related to matters such as:

– reform or introduction of a written system (spelling system; e.g., the switch from the Arabic to the Latin writing system in Turkey during the reign of Atatürk);
– standardisation (a codified form) of a certain language or language variety involving the preparation of a normative orthography, grammar, and dictionary;
– lexical modernisation of a language (for example, Hebrew and Hausa).

Acquisition planning concerns the teaching and learning of languages. Acquisition planning in spoken languages is often supported and promoted by national institutions such as the Dante Institute (Italian), the Goethe Institute (German), Maison Descartes (French), etc. Comparable organisations that are concerned with the teaching and learning of sign languages are often run by international organisations of the Deaf (e.g. the World Federation of the Deaf), by national organisations of the Deaf, by universities (e.g. Stockholm University; the Deafness, Cognition and Language (DCAL) Research Centre at University College London; Gallaudet University in Washington, DC), or by national sign language centres, such as the Centre for Sign Language and Sign Supported Communication (KC) in Denmark, the Institute for German Sign Language in Hamburg, the CNR in Rome, and the Dutch Sign Centre in the Netherlands. Moreover, many individual researchers all over the world have contributed in significant ways to the development and spread of their national sign languages. Status planning, corpus planning, and acquisition planning have all played an important role with respect to sign languages around the globe and will be discussed in the next sections.
3. Status planning: Recognition of sign languages

Since the early days of sign language research in the middle of the 20th century, status planning and, more specifically, the recognition of sign languages as fully-fledged languages has been a major issue. The status of a sign language depends on the status of deaf people, the historical background, and the role the language plays within deaf education. The history of sign language research is thus closely related to the history of
deaf education and the perspectives on deaf people. Therefore, before turning to the recognition of sign languages in section 3.2, two different views on deafness and deaf people will first be introduced in the next section.
3.1. Perspectives on deafness: Deficit or linguistic minority

For centuries, deafness has been viewed as a deficit. This often medical perspective focuses on the fact that deaf people cannot hear (well). From this perspective, deaf people have a problem that needs to be fixed as quickly as possible in order for them to integrate properly and fully into hearing society. From the perspective of the hearing majority, deaf people are different and need to assimilate and adapt. Great emphasis is therefore put on technological aids, ranging from hearing aids to cochlear implants (CI). With each new technology that becomes available, the hope of finally curing deafness increases. Within this mostly hearing perspective, there is no room for Deaf identity or Deaf culture: deaf people are just hearing people who cannot hear (Lane 2002). This perspective on deafness has had, and still has, a tremendous impact on the lives of deaf people throughout the world (for an overview, see Monaghan et al. (2003) and Ladd (2003)). The status of sign languages in Western societies has varied throughout history. In some periods, sign languages were used in some way or other in deaf education (for instance, in 18th century Paris); at other times, sign languages were banned from deaf education altogether (from 1880 to 1980 in most Western societies; see chapter 38, History of Sign Languages and Sign Language Linguistics, for details on the history of deaf education). The first study that applied the principles of spoken language linguistics to a sign language (American Sign Language, ASL) was William Stokoe’s monograph Sign Language Structure (Stokoe 1960). This study, as well as subsequent, by now ‘classic’, studies on ASL by American linguists such as Edward Klima and Ursula Bellugi (Klima/Bellugi 1979) and Charlotte Baker and Dennis Cokely (Baker/Cokely 1980), gave the impetus to another perspective on deafness and deaf people: if sign languages are natural languages, then their users belong to a linguistic minority. Consequently, deaf people are not hearing people with a deficit; they are people who are different from hearing people. They may not have access to a spoken language, but they do have access to a visual language, which can be acquired in a natural way, comparable to the acquisition of a spoken language (see chapter 28, Acquisition). Under this view, then, deaf people form a Deaf community with its own language, identity, and culture. Still, the Deaf minorities that make up Deaf communities are not a homogeneous group. Paddy Ladd writes:

It is also important to note that within Western societies where there is significant migration, or within linguistic minorities inside a single nation-state, there are Deaf people who are in effect, minorities within minorities. Given the oralist hegemony, most of these Deaf people have been cut off not only from mainstream culture, but also from their own ‘native’ cultures, a form of double oppression immensely damaging to them even without factoring oppression from Deaf communities themselves. (Ladd 2003, 59)
37. Language planning Furthermore, there is an important difference between Deaf communities and other language minorities. Sign languages are passed on from one generation to the next only to a very limited extent. The main reason for this is that more than 95 % of deaf people have hearing parents for whom a sign language is not a native language. Therefore, most deaf people have learned their sign language from deaf peers, from deaf adults outside of the family, or from parents who have acquired a sign language as a second language. It has been pointed out that ⫺ contrary to what many believe ⫺ linguistic analyses and scientific descriptions of sign language did exist in the Unites States as early as the 19th century, and that deaf educators did have access to literature related to the role, use, and structure of sign language (Nover 2000). However, these studies never had an impact comparable to that of the early linguistic studies on the structure of ASL mentioned above, which gave a major impulse to linguistic research. In many countries, this legitimization of signing also led to major changes in deaf education policies and to the emancipation of deaf people. It seems that the timing of Stokoe’s analysis was perfect: oral methods in Western deaf education had failed dramatically, deaf people did not integrate into the hearing society, and the reading skills of deaf school leavers did not reach beyond those of nine year old hearing children (Conrad 1979). Furthermore, around the same time, language acquisition studies stressed the importance of early mother-child interaction for adequate language development. In several parts of the world, the awareness of the importance of their natural language for deaf people increased. The first European conference on sign language research, held in Sweden in 1979 (Ahlgren/Bergman 1980), inspired other researchers to initiate research establishing the existence of distinct sign languages in many different European countries. In 1981, Sweden was the first country in the world to recognise its national sign language, Swedish Sign Language (SSL), as a language by making it mandatory in deaf education. The legislation followed a 1977 home language reform measure allowing minority and immigrant children to receive instruction in their native language (Monaghan 2003, 15).
3.2. Legal recognition of sign languages

For a very long time, sign languages have been ignored and, as a consequence, their potential has been underestimated. In areas where deaf people are not allowed to use their own natural language in all functions of society, their sign language clearly has a minority status, which is closely related to the status of its users. However, being a minority does not always automatically generate a minority status for the respective sign language. There are examples of communities in which the hearing majority used or still uses the sign language of the deaf minority as a lingua franca: for instance, Martha’s Vineyard (Groce 1985), the village of Desa Kolok in Bali (Branson/Miller/Marsaja 1996), and the village of Adamorobe in Ghana (Nyst 2007) (see chapter 24, Shared Sign Languages, for discussion). The status of sign languages depends very much on the legal recognition of these languages, which – especially from the point of view of Deaf communities and Deaf organisations – has been one of the most important issues in various countries since 1981. Most of the activities centred around the topic of sign language recognition and bilingual
education, which is quite understandable given the history of deaf education and the fact that deaf people have been in a dependent and mostly powerless position for centuries. Legal recognition may give the power of control, that is, the right of language choice, back to those who should choose, who should be in control: the deaf people themselves. A word of caution, though, is necessary here and is adequately formulated by Verena Krausneker:

Recognition of a Sign Language will not solve all problems of its users at once and maybe not even in the near future. But legal recognition of Sign Languages will secure the social and legal space for its users to stop the tiresome work of constant self-defence and start creative self-defined processes and developments. Legal recognition of a language will give a minority space to think and desire a plan and achieve the many other things its members think they need or want. Basic security in the form of language rights will influence educational and other most relevant practices deeply. (Krausneker 2003, 11)
The legal status of sign languages differs from country to country. There is no standard way in which such recognition can be formally or legally extended: every country has its own interpretation. In some countries, the national sign language is an official state language, whereas in others, it has a protected status in certain areas, such as education. Australian Sign Language (Auslan), for example, was recognised by the Australian Government as a “community language other than English” and as the preferred language of the Deaf community in policy statements in 1987 and 1991. This recognition, however, does not ensure any structural provision of services in Auslan. Another example of legal recognition is Spain. Full legal recognition of sign languages in Spain was only granted in 2007, when a Spanish state law concerning sign languages was passed. However, several autonomous regional governments had already passed bills during the 1990s that indirectly recognised the status of sign language and aimed at promoting accessibility in Spanish Sign Language (LSE) in different areas, education being one of the central ones. It should be pointed out that legal recognition is not equivalent to official status because the Spanish Constitution of 1978 only grants official status to four spoken languages (Spanish, Catalan, Galician, and Basque). The new Catalan Autonomy Law of 2006 includes the right to use Catalan Sign Language (LSC) and promotes its teaching and protection. The Catalan Parliament had already passed a non-binding bill in 1994 promoting the use of LSC in the Catalan education system and research into the language (Josep Quer, personal communication). The situation with respect to legal recognition can be summarised as follows (Wheatley/Pabsch 2010; Krausneker 2008):

– Ten countries have recognised their national sign languages in constitutional laws: Austria (2005), the Czech Republic (1998), Ecuador (1998), Finland (1995), Iceland (2011), New Zealand (2006), Portugal (1998), South Africa (1996), Uganda (1995), and Venezuela (1999).
– In the following 32 countries, the national sign languages have legal status through other laws: Australia, Belarus, Belgium (Flanders), Brazil, Canada, China, Colombia, Cyprus, Denmark, France, Germany, Greece, Hungary, Iran, Latvia, Lithuania, Mozambique, Norway, Peru, Poland, Romania, Russia, the Slovak Republic, Spain,
Sri Lanka, Sweden, Switzerland, Thailand, Ukraine, the United States, Uruguay, and Zimbabwe.
– In Cuba, Mauritius, the Netherlands, and the United Kingdom, the national sign languages have been recognised politically, which has resulted in the funding of large national projects (e.g. DCAL in London) and institutions. In the Netherlands, for instance, the Dutch Sign Centre is partially funded for lexicographic activities, and the Sign Language of the Netherlands (NGT) Interpreter/Teacher training programme was established at the University of Utrecht. Note, however, that this type of legal recognition is not sufficient under Dutch law to confer a legal status on NGT itself as a language.

The European Parliament unanimously approved a resolution about sign languages on June 17, 1988. The resolution asks all member countries for recognition of their national sign languages as official languages of the Deaf. So far, this resolution has had limited effect. In 2003, sign languages were recognised as minority languages in the European Charter for Regional or Minority Languages. Another way to pursue legal recognition might be via a new human rights charter for which linguistic human rights are a prerequisite and which would be ratified by all member states. In 1996, a number of institutions and non-governmental organisations, present at the UNESCO meeting in Barcelona, presented the Universal Declaration of Linguistic Rights, which takes language communities and not states as its point of departure (UNESCO 1996). One of the relevant articles in the light of the recognition of sign languages is Article 3. Sign language users and their languages have been in danger at various times throughout history. However, a growing number of people have been referring to sign languages as endangered languages – in fact, likely to become extinct in the near future – ever since Graham Turner expressed his concerns in 2004:

We have seen a dramatic growth in several major types of threat to heritage sign languages: demographic shifts which alone will reduce signing populations sharply, the rapid uptake of cochlear implants […], the development and imminent roll-out of biotechnologies such as genetic intervention and hair-cell regeneration; and the on-going rise of under-skilled L2 users of sign language in professional positions, coinciding with a decline in concern over the politics of language among younger Deaf people. (Turner 2004, 180)
Legal recognition will not be sufficient to ensure the status of sign languages. A community that wants to preserve its language has a number of options. A spoken language example is that of Modern Hebrew, which was revived as a mother tongue after centuries of being learned and studied only in its ancient written form. Similarly, Irish has had considerable institutional and political support as the national language of Ireland, despite major inroads by English. In New Zealand, Maori communities established nursery schools staffed by elders and conducted entirely in Maori, called kohanga reo, “language nests” (Woodbury 2009). It is the duty of linguists to learn as much as possible about languages, so that even if a language disappears, knowledge of that language will not disappear at the same time. To that end, researchers document sign language use in both formal and informal settings on video, along with translations and notations. In recent years, a growing number of projects has been established to compile digital sign language corpora; for
instance, for NGT (Crasborn/Zwitserlood/Ros 2008), British Sign Language (Schembri et al. 2009), and SSL (Bergman et al. 2011). These corpora will not only support the linguistic research that is needed to describe the individual languages, they will also provide access for learners of sign languages, and they will ensure preservation of the language as it is used at a given point in time (for digital sign language corpora, see also chapter 44, Computer Modelling). However, a language will only be truly alive and out of danger as long as there is a community of language users and the language is transmitted from generation to generation. Sign languages are extremely vulnerable in this respect.
4. Corpus planning

One of the goals of corpus planning is prescriptive intervention in the forms of a language. Corpus planning is concerned with the internal structure of a language, that is, with matters such as writing systems, standardisation, and lexical modernisation. There is no standardised writing system for sign languages comparable to the writing systems that exist for spoken languages. Rather, there are many different ways to notate signs and sign sentences, based on Stokoe’s notation system (e.g. the Hamburg Notation System) or on dance writing systems (e.g. Sutton’s Sign Writing System; see chapter 43, Transcription, for details). The lack of a written system has contributed greatly to language variation within sign languages. In relation to most sign languages, standardisation has not been a goal in itself. Linguistic interest in sign languages has led to documentation of the lexicon and grammar of a growing number of sign languages. In this section, we will discuss standardisation and codification. In section 5, we will present a case study of an explicit form of standardisation as a prerequisite for legal recognition of a sign language, NGT in the Netherlands.
4.1. Standardisation

A standard language is most commonly defined as a codified form of a language that is accepted as the uniform linguistic norm (Deumert 2001; Reagan 2001). The term 'codified' refers to explicit norms of a language specified in documents such as dictionaries and grammars. The concept of a standard language is often wrongly associated with the 'pure', 'original' form of a language – as if there were something like a 'pure' form of a language. Often, it is the most prestigious form of a language that becomes standardised. The variety of those who have power and status in society is usually seen as the most prestigious one, and acceptance of this variety as the norm is vital for successful standardisation. With respect to spoken languages, the standard form is most often the dialect that is associated with specific subgroups and with specific functions. In this context, it is instructive to consider the essential features of modern Standard English listed by David Crystal (1995, 110):

– It is historically based on one dialect among many, but now has special status, without a local base. It is largely (but not completely) neutral with respect to regional identity.
– Standard English is not a matter of pronunciation, but rather of grammar, vocabulary, and orthography.
– It carries most 'prestige' within English-speaking countries.
– It is a desirable educational target.
– Although widely understood, it is not widely spoken.

Status planning and corpus planning are very closely related. If the status of a language needs to be raised, a form of corpus planning is required. For example, the lexicon needs to be expanded in order to meet the needs of the different functions of the language. Different degrees of standardisation can be distinguished (based on Deumert 2001):

1. Un-standardised spoken or sign language for which no written system has been developed.
2. Partly standardised or un-standardised written language used mainly in primary education. The language is characterised by high degrees of linguistic variation.
3. Young standard language: used in education and administration, but not felt to be fit for use in science, technology, and at a tertiary or research level.
4. Archaic standard language: languages which were used widely in pre-industrial times but are no longer spoken, such as classical Latin and Greek.
5. Mature modern standard language: employed in all areas of communication; for example, English, French, German, Dutch, Italian, Swedish, etc.

Most sign languages can be placed in stages 1–3. We have to distinguish between active forms of standardising a language (see section 5 for further discussion) and more natural processes of language standardisation. Any form of codification of a language, however, will lead – even unintentionally – to some form of standardisation. This is the case for many sign languages, as will be discussed in the next section.
4.2. Codification of the language

The history of most sign languages is one of oppression by hearing educationalists. Until the mid 1960s, most sign languages were not viewed as fully-fledged languages on a par with spoken languages. Once sign language research starts in a country, the first major task researchers usually set themselves is the compilation of a dictionary. Clearly, dictionaries are much more than just texts that describe the meaning of words or signs. The word 'dictionary' suggests authority, status, and scholarship: the size of the dictionary, the paper that is used, and its cover all contribute to the status of the language that is described. The first dictionaries of sign languages did not only serve the purpose of describing the lexicon of the language; rather, for most Deaf communities, a sign language dictionary is a historic publication of paramount social importance, which can be used as a powerful instrument in the advancement of high-quality bilingual education as well as in the full exercise of the constitutional rights of deaf people (e.g. the first BSL/English dictionary (Brien 1992) and the first print publication of the standard signs of NGT, the Van Dale Basiswoordenboek NGT (Schermer/Koolhof 2009)). Introductions to sign dictionaries often explicitly state that the purpose of the dictionary is to confirm and raise the status of the sign language.
Sign language dictionaries deal with variation in the language in different ways. Even though the primary intention of the majority of sign lexicographers is to document and describe the lexicon of a sign language, their choices in this process determine which sign variants are included and which are not. Therefore, inevitably, many sign language lexicographers produce a standardising dictionary of the sign language, or at least (mostly unintentionally) nominate one variant as the preferred one. And even if this is not the intention of the lexicographer, the general public – especially hearing sign language learners – often interprets the information in the dictionary as reflecting the prescribed, rather than the described, language. The fact that sign languages lack a written form confronts lexicographers with a serious problem: which variant of a sign is the correct one and should thus be included as the citation form in the dictionary? Lexicographers therefore have to determine, in one way or another, whether an item in the language is used by the majority of a given population, or whether it is used by a particular subset of the population. To date, only a few sign language dictionaries have been based on extensive research on language variation (e.g. for Auslan (Johnston 1989), NGT (Schermer/Harder/Bos 1988; Schermer et al. 2006; Schermer/Koolhof 2009), and Danish Sign Language (Centre for Sign Language and Sign Supported Speech KC 2008)). There are also online dictionaries available which document the regional varieties of a particular sign language (e.g. the Flemish Sign Language dictionary (www.gebaren.ugent.be) and the work done on Swiss German Sign Language by Boyes Braem (2001)).

In cases where sign language dictionaries have indeed been made with the explicit purpose of standardising the sign language, but have not been based on extensive research on lexical variation, these attempts at lasting standardisation have usually failed because the deaf community did not accept the dictionary as a reflection of their sign language lexicon; this happened, for instance, in Flanders and Sweden in the 1970s. Another example of controversy concerns Japanese Sign Language (NS). Nakamura (2011) describes the debate about the way in which the dominant organization of deaf people in Japan, the Japanese Federation of the Deaf (JFD), has tried since 1980 to maintain active control of the lexicon in a way that is no longer accepted by a growing part of the deaf community. The controversy is mostly about the way in which new lexicon is coined, which – according to members of D-Pro (a group of young Deaf people that has been active since 1993) – does not reflect pure NS, which in their view should exclude mouthings or vocalisations of words.

Another form of codification is the description of the grammar of a language. Since sign language linguistics is a fairly young research field, to date very few comprehensive sign language grammars are available (see, for example, Baker/Cokely (1980) for ASL; Schermer et al. (1991) for NGT; Sutton-Spence/Woll (1999) for BSL; Johnston/Schembri (2007) for Auslan; Gébert/Adone (2006) for Mauritian Sign Language; Papaspyrou et al. (2008) for German Sign Language; and Meir/Sandler (2008) for Israeli Sign Language). As with dictionaries, most grammars are intended to be descriptive, but are viewed by language learners as prescriptive.
5. A case study: Standardisation of Sign Language of the Netherlands (NGT)

In this section, the process of standardisation will be illustrated by means of a case study: the standardisation of NGT. Schermer (2003) has described this process in full detail; the information from that article is briefly summarised below.

As a result of a decade of lobbying for the recognition of NGT by the Dutch Deaf Council, a covenant was signed in 1998 between all schools for the Deaf, the Organisation for Parents of Deaf Children (FODOK), the Ministry of Education, and the Ministry of Health and Welfare to carry out three projects whose goal was to implement bilingual (NGT/Dutch) education for Deaf children. One of these projects was the Standardisation of the Basic Lexicon of NGT to be used in schools for the Deaf (referred to as the 'STABOL' project). The projects were carried out between 1999 and 2002 by the Dutch Sign Centre, the University of Amsterdam, and the schools for the Deaf. The STABOL project was required by the Dutch government as a prerequisite for the legal recognition of NGT, despite objections by the Dutch Deaf community and NGT researchers. In the period between 1980 (when research on NGT started) and 1999, a major project had been carried out which extensively documented the lexicon of NGT. The results of this so-called KOMVA project, which had yielded information about the extent of regional variation in NGT (cf. Schermer 1990, 2003), formed the basis for the standardisation project.

The standardisation of the NGT basic lexicon was a highly controversial issue. As far as the Dutch government was concerned, it was not negotiable: without a standard lexicon, there could be no legal recognition of NGT. There was also an economic argument for standardising part of the lexicon: the development of NGT materials in different regional variants was expensive. Moreover, hearing parents and teachers were not inclined to learn different regional variants. The schools for the Deaf were also in favour of national NGT materials that could be used in NGT tests to monitor the development of linguistic skills and to set a national standard. The idea of standardisation, however, met with strong opposition from the Deaf community and from linguists in the Netherlands at that time. Probably, the concept of standardisation was difficult for the Deaf community to accept since it was not so long ago that their language had been suppressed by hearing people – and now, once again, it was hearing people who were enforcing some form of standardisation.

The STABOL project was carried out by a group of linguists, native deaf signers (mostly deaf teachers), and native hearing signers in close cooperation with the Deaf community, and was coordinated by the Dutch Sign Centre. A network of Deaf signers from different regions was established. This network in turn maintained contacts with larger groups of Deaf people, whose comments and ideas were shared with the project group, which made all of the final decisions. Within the project, a standard sign was defined as

a sign that will be used nationally in schools and preschool programs for deaf children and their parents. It does not mean that other variants are not 'proper signs' that the Deaf community can no longer use. (Schermer 2003, 480)
5.1. Method of standardisation

The STABOL project set out to standardise a total of 5000 signs: 2500 signs were selected from the basic lexicon, which comprises all signs that are taught in the first three levels of the national NGT courses; 2500 signs were selected in relation to educational subjects. For this second group of signs, standardisation was not a problem since these were mostly new signs with very little or no variation. We will expand a little more on the first set of 2500 signs.

The process of standardising NGT started in the early 1980s with the production of national sign language dictionaries which included all regional variants and preference signs. Preference signs are those signs that are identical in all five regions of the Netherlands (Schermer/Harder/Bos 1988). Discussions amongst members of the STABOL project group revealed that the procedures we had used in previous years (selection of preference signs) had actually worked quite well. The STABOL project group decided to use in their meetings the set of linguistic guidelines that had been developed on the basis of previous research (see Schermer (2003) for details). In principle, signs that were the same nationally (i.e. those that were labelled 'preference signs' in the first dictionaries) were accepted as standard signs. The 2500 signs from the basic lexicon that were standardised in the STABOL project can be characterised as follows:

– 60 % of the signs are national signs that are recognised and/or used with the same meaning in all regions (no regional variation);
– 25 % of the signs are regional signs that have been included in the standard lexicon;
– for 15 % of the signs, one variant was selected as the standard sign.
Fig. 37.1: Regional NGT variants included as synonyms
Fig. 37.2: Regional NGT variants included as signs with refined meaning
Hence, for 25 % of the signs, regional variation was included in the standard lexicon, in the following ways. First, regional variation is included in the standard lexicon in the form of synonyms. This is true, for example, for the signs careful and mummy shown in Figure 37.1. The reason for including these regional signs as synonyms was that the members of the STABOL group could not agree on one standard sign based on the set of criteria. In this manner, a great number of synonyms were added to the lexicon. Apart from synonyms, regional variation is included through refining the meaning of a sign; for example, the signs horse and ride-on-horseback, or baker and bakery. In the Amsterdam region, the sign horse (Figure 37.2b) was used both for the animal and for the action of riding on horseback. In contrast, in the Groningen region, the sign horse (Figure 37.2a) was used only for the animal and not for horseback riding. In the standardisation process, the Groningen sign became the standard sign horse, while the Amsterdam sign became the standard sign ride-on-horseback (Figure 37.2b). Consequently, both regional variants were included.

In the STABOL project, an explicit choice between regional variants – based on the linguistic criteria mentioned earlier in this chapter – was made for only a few hundred of the 2500 standardised signs. One of the reasons that the NGT standard lexicon has been accepted by teachers of the Deaf, who had to teach standard signs rather than their own regional variants, might be the fact that the actual number of signs affected by the standardisation process is quite low. Note, however, that the standard lexicon was introduced in the schools for the Deaf and in the NGT course materials; the Deaf adult population continued to use regional variants. Interestingly, Deaf children of Deaf parents who have been educated with the standard signs, and who are aware that there is a difference in signing between their Deaf parents, their Deaf grandparents, and themselves, identify with these standard signs as their own signs (Elferink, personal communication).
5.2. Results and implementation

As a result of the STABOL project, 5000 signs were standardised and made available in 2002. Since then, the Dutch Sign Centre has continued to make an inventory of
signs, to develop new lexicon, and to disseminate the NGT lexicon. The database currently contains 16,000 signs, of which 14,000 have been made available in different ways. In Figure 37.3, the distribution of these 14,000 signs is shown: 25 % of the signs were standardised within the STABOL project, 42 % are existing national signs (no regional variation), and 33 % are new lexical items (mostly signs that are used in health and justice settings and for school subjects).
Fig. 37.3: Distribution of signs in NGT standard sign language dictionaries
Naturally, the establishment of a standard sign alone is not sufficient for standardising a lexicon. The implementation of the NGT standard lexicon is coordinated by the Dutch Sign Centre and involves several activities, some of which are ongoing:

– Workshops were organised to inform the Deaf community and NGT teachers about the new lexicon.
– The lexicon was distributed via DVD-ROMs, and all national NGT course materials have been adapted to include standard signs.
– All schools for the Deaf have adopted the standard lexicon, and since 2002 all teachers are required to learn and teach standard NGT signs.
– On television, only standard signs are used by the NGT interpreters.
– The NGT curriculum that was developed for primary deaf schools also contains standard NGT signs.
– Since 2006, online dictionaries with almost 14,000 standard signs have been available. As of 2011, regional variants are also shown in the main online dictionary. The dictionaries are linked to the lexical database; both the dictionaries and the database are maintained by the Dutch Sign Centre and updated daily.
– In 2009, the first national standard NGT dictionary (3000 signs) was published in book form (Schermer/Koolhof 2009), followed in 2010 by the online version with 3000 sign movies and 3000 example sentences in NGT.

Some people view the production of dictionaries with standard signs as avoiding the issue of regional variation altogether (see chapter 33, Sociolinguistic Aspects of Variation and Change). This is not the case in the Netherlands: in the 1980s, an inventory of regional variation was made based on a large corpus, and, unlike in most other countries at that time, our first sign language dictionaries contained all regional variants. Without thorough knowledge of lexical variation, the standardisation of the NGT lexicon and the implementation of the standard signs in all schools for the deaf and in teaching materials would not have been possible. In 2011, a large project was initiated by the
Dutch Sign Centre to include in the database films of the original data collected in 1982, and thus to make the regional variation available in addition to the standard lexicon.

Note finally that, despite the fact that the basic lexicon of NGT was standardised in 2002, the Dutch Government still has not legally recognised NGT as a language. There are a number of implicit legal recognitions in the Netherlands, such as the right to NGT interpreters and the establishment of the NGT teacher/interpreter training programme, but these do not amount to legal recognition of NGT as a language used in the Netherlands. An important reason why NGT has not been legally recognised within the Dutch constitution is that spoken Dutch is not officially recognised as a language in the Dutch constitution either. The Dutch Deaf Council and the Dutch Sign Centre are still working towards some form of legal recognition of NGT as a language.
6. Lexical modernisation

For almost a century, throughout Western Europe, most sign languages were forbidden in the educational system and were not used in all domains of society. At least the latter is also true for sign languages of other continents. As a consequence, there are gaps in their vocabularies compared to the spoken languages of the hearing community. The recognition of sign languages, the introduction of bilingual programmes in deaf education, and the continuing growth of educational sign language interpreting at secondary and tertiary levels of education have created an urgent need for a coordinated effort to determine and develop new signs for various contexts, such as signs for technical terms and school subjects. A productive method for coining new signs is to work with a team of native deaf signers, (deaf) linguists, and people who have the necessary content knowledge. Good examples of a series of dictionaries aimed at specific professions for which new signs had to be developed are the sign language dictionaries produced by the Arbeitsgruppe Fachgebärden ('team for technical signs') at the University of Hamburg. In the past 17 years, this team has compiled, for instance, lexicons on psychology (1996), carpentry (1998), and health care (2007).

In the Netherlands, the NGT lexicon has been expanded systematically since 2000. A major tool in the development and dissemination of new lexical items is a national database with an online dictionary, coordinated by one national centre, the Dutch Sign Centre. The Dutch Ministry of Education funds the Dutch Sign Centre specifically for maintaining and developing the NGT lexicon. This is crucial for the development of (teaching) materials and dictionaries, and for the implementation of bilingual education for deaf children.
7. Acquisition planning

As described before, acquisition planning concerns the teaching and learning of languages. Some form of acquisition planning is required to change the status of a language and to ensure its survival. Ideally, a nationally funded institute
or academy (such as, for instance, the Académie française for French or the Fryske Akademy for Frisian) should coordinate the distribution of teaching materials, the development of dictionaries and grammars, and the development of a national curriculum comparable to the Common European Framework of Reference for second language learning. Even though the situation has improved greatly for most sign languages in the last 25 years, their position is still very vulnerable and in most countries depends on the efforts of a few individuals.

Acquisition planning, corpus planning, and status planning are very closely related. With respect to sign languages, in most cases there is no systematic plan covering these three types of planning. While there is no single plan that suits all situations, there are still some general guidelines that can be followed:

– Describe the state of affairs with respect to status planning, corpus planning, and acquisition planning in your country.
– Identify the stakeholders and their specific interests in relation to sign language; for example: sign language users (the Deaf community, but also hard of hearing people who use a form of sign-supported speech), educationalists, care workers, researchers, parents of deaf children, hearing sign language learners, interpreters, government, etc.
– Identify the needs and goals of each of the stakeholders for each of the types of planning and make a priority list.
– Identify the steps that need to be taken and the people who need to be involved and to take responsibility, estimate the funding that is necessary, and provide a timetable.

Acquisition planning is crucial for the development and survival of sign languages and should be taken more seriously by sign language users, researchers, and governments than it has been to date. It is time for a National Sign Language Academy in each country, whose tasks should include the preservation of the language, the protection of the rights of the language users, and the promotion of the language by developing adequate teaching materials.
8. Conclusion

In this chapter, three aspects of language planning have been described for sign languages: status planning, corpus planning, and acquisition planning. Within status planning, in most countries the focus of attention is usually on the legal recognition of the national sign language. As of 2011, only 42 countries had legally recognised a national sign language in one way or another. Even though legal recognition may imply some form of protection for sign language users, it does not solve all problems. As more and more linguists point out, sign languages are endangered languages. Ironically, now that sign languages are finally taken seriously by linguists and hearing societies, their survival is threatened as a consequence of the medical perspective on deafness and rapid technological development. Languages only exist within language communities, but the existence of signing communities is presently at risk for several reasons, the main one being the decreasing number of native deaf signers around the world. This
decrease is a consequence of reduced or absent sign language use with deaf children who received a cochlear implant at a very young age and, more generally, of the fact that deaf communities are increasingly heterogeneous.

With respect to corpus planning, we have discussed standardisation and lexical modernisation. Standardisation of languages in general is a controversial issue, and there are only a few examples of efforts to standardise a sign language. At the same time, one has to be aware that any form of codification of a language implies some form of standardisation, even if unintentionally. The process of standardisation of the NGT lexicon has been discussed as an example of a specific form of standardisation, based on thorough knowledge of the lexical variation existing in the language. Finally, in order to strengthen the position of sign languages around the world, it is necessary for the Deaf community, other users of sign language, and researchers to work closely together – within individual countries and globally – in an attempt to draft an acquisition plan, to provide language learners with adequate teaching materials, and to describe and preserve the native languages of deaf people.
9. Literature

Ahlgren, Inger/Bergman, Brita (eds.) 1980. Papers from the First International Symposium on Sign Language Research, June 10–16, 1979. Leksand, Sweden: Sveriges Dövas Riksförbund.
Baker, Charlotte/Cokely, Dennis 1980. American Sign Language. A Teacher's Resource Text on Grammar and Culture. Silver Spring, MD: T.J. Publishers.
Bergman, Brita/Nilsson, Anna-Lena/Wallin, Lars/Björkstrand, Thomas 2011. The Swedish Sign Language Corpus: www.ling.su.se.
Boyes Braem, Penny 2001. A Multimedia Bilingual Database for the Lexicon of Swiss German Sign Language. In: Bergman, Brita/Boyes Braem, Penny/Hanke, Thomas/Pizzuto, Elena (eds.), Sign Transcription and Database Storage of Sign Information (Special Issue of Sign Language & Linguistics 4(1/2)), 241–250.
Branson, Jan/Miller, Don/Marsaja, I Gede 1996. Everyone Here Speaks Sign Language, too: A Deaf Village in Bali, Indonesia. In: Lucas, Ceil (ed.), Multicultural Aspects of Sociolinguistics in Deaf Communities. Washington, DC: Gallaudet University Press, 39–61.
Brien, David (ed.) 1992. Dictionary of British Sign Language/English. London: Faber and Faber.
Centre for Sign Language and Sign Supported Speech KC 2008. Ordbog over Dansk Tegnsprog. Online dictionary: www.tegnsprok.dk.
Conrad, Richard 1979. The Deaf Schoolchild: Language and Cognitive Function. London: Harper and Row.
Cooper, Robert 1989. Language Planning and Social Change. Bloomington, IN: Indiana University Press.
Crasborn, Onno/Zwitserlood, Inge/Ros, Johan 2008. Sign Language of the Netherlands (NGT) Corpus: www.ngtcorpus.nl.
Crystal, David 1995. The Cambridge Encyclopedia of the English Language. Cambridge: Cambridge University Press.
Deumert, Ana 2001. Language Planning and Policy. In: Mesthrie, Rajend/Swann, Joan/Deumert, Andrea/Leap, William (eds.), Introducing Sociolinguistics. Edinburgh: Edinburgh University Press, 384–419.
Fishman, Joshua A. 1983. Modeling Rationales in Corpus Planning: Modernity and Tradition in Images of the Good Corpus. In: Cobarrubias, Juan/Fishman, Joshua A. (eds.), Progress in Language Planning: International Perspectives. Berlin: Mouton, 107–118.
Gébert, Alain/Adone, Dany 2006. A Dictionary and Grammar of Mauritian Sign Language, Vol. 1. Vacoas, République de Maurice: Editions Le Printemps.
Groce, Nora 1985. Everyone Here Spoke Sign Language: Hereditary Deafness on Martha's Vineyard. Cambridge, MA: Harvard University Press.
Haugen, Einar 1968. Language Planning in Modern Norway. In: Fishman, Joshua A. (ed.), Readings in the Sociology of Language. The Hague: Mouton, 673–687.
Johnston, Trevor 1989. Auslan Dictionary: A Dictionary of Australian Sign Language (Auslan). Adelaide: TAFE National Centre for Research and Development.
Johnston, Trevor/Schembri, Adam 2007. Australian Sign Language. An Introduction to Sign Linguistics. Cambridge: Cambridge University Press.
Klima, Edward/Bellugi, Ursula 1979. The Signs of Language. Cambridge, MA: Harvard University Press.
Krausneker, Verena 2003. Has Something Changed? Sign Languages in Europe: The Case of Minorised Minority Languages. In: Deaf Worlds 19(2), 33–48.
Krausneker, Verena 2008. The Protection and Promotion of Sign Languages and the Rights of Their Users in the Council of Europe Member States: Needs Analysis. Integration of People with Disabilities Division, Social Policy Department, Directorate General of Social Cohesion, Council of Europe. [http://www.coe.int/t/DG3/Disability/Source/Report_Sign_languages_final.pdf]
Ladd, Paddy 2003. Understanding Deaf Culture. In Search of Deafhood. Clevedon: Multilingual Matters Ltd.
Lane, Harlan 2002. Do Deaf People Have a Disability? In: Sign Language Studies 2(4), 356–379.
Meir, Irit/Sandler, Wendy 2008. A Language in Space. The Story of Israeli Sign Language. New York, NY: Lawrence Erlbaum.
Monaghan, Leila 2003. A World's Eye View: Deaf Cultures in Global Perspective. In: Monaghan, Leila/Schmaling, Constanze/Nakamura, Karen/Turner, Graham (eds.), Many Ways to Be Deaf. International Variation in Deaf Communities. Washington, DC: Gallaudet University Press, 1–24.
Monaghan, Leila/Schmaling, Constanze/Nakamura, Karen/Turner, Graham (eds.) 2003. Many Ways to Be Deaf. International Variation in Deaf Communities. Washington, DC: Gallaudet University Press.
Nakamura, Karen 2011. The Language Politics of Japanese Sign Language (Nihon Shuwa). In: Napoli, Donna Jo/Mathur, Gaurav (eds.), Deaf Around the World. The Impact of Language. Oxford: Oxford University Press, 316–332.
Nover, Stephen 2000. History of Language Planning in Deaf Education: The 19th Century. PhD Dissertation, University of Arizona.
Nyst, Victoria 2007. A Descriptive Analysis of Adamorobe Sign Language (Ghana). PhD Dissertation, University of Amsterdam. Utrecht: LOT.
Orwell, George 1946. Politics and the English Language. London: Horizon.
Papaspyrou, Chrissostomos/Meyenn, Alexander von/Matthaei, Michaela/Herrmann, Bettina 2008. Grammatik der Deutschen Gebärdensprache aus der Sicht gehörloser Fachleute. Hamburg: Signum.
Reagan, Timothy 2001. Language Planning and Policy. In: Lucas, Ceil (ed.), The Sociolinguistics of Sign Languages. Cambridge: Cambridge University Press, 145–180.
Schembri, Adam/Fenlon, Jordan/Stamp, Rose/Rentelis, Ramas 2009. British Sign Language Corpus Project: Documenting and Describing Variation and Change in BSL. Paper presented at the workshop Sign Language Corpora: Linguistic Issues, University College London.
Schermer, Trude 1990. In Search of a Language. PhD Dissertation, University of Amsterdam. Delft: Eburon Publishers.
Schermer, Trude 2003. From Variant to Standard: An Overview of the Standardisation Process of the Lexicon of Sign Language of the Netherlands Over Two Decades. In: Sign Language Studies 3(4), 469–487.
Schermer, Trude/Fortgens, Connie/Harder, Rita/Nobel, Esther de (eds.) 1991. De Nederlandse Gebarentaal. Deventer: Van Tricht.
Schermer, Trude/Geuze, Jacobien/Koolhof, Corline/Meijer, Elly/Muller, Sarah 2006. Standaard Lexicon Nederlandse Gebarentaal, Deel 1 & 2 (DVD-ROM). Bunnik: Nederlands Gebarencentrum.
Schermer, Trude/Harder, Rita/Bos, Heleen 1988. Handen uit de Mouwen: Gebaren uit de Nederlandse Gebarentaal in Kaart Gebracht. Amsterdam: NSDSK/Dovenraad.
Schermer, Trude/Koolhof, Corline (eds.) 2009. Van Dale Basiswoordenboek Nederlandse Gebarentaal. Utrecht: Van Dale. [www.gebarencentrum.nl]
Stokoe, William C. 1960. Sign Language Structure: An Outline of the Visual Communication System of the American Deaf. In: Studies in Linguistics Occasional Papers 8. Buffalo: University of Buffalo Press. [Re-issued 2005, Journal of Deaf Studies and Deaf Education 10(1), 3–37]
Sutton-Spence, Rachel/Woll, Bencie 1999. The Linguistics of British Sign Language. An Introduction. Cambridge: Cambridge University Press.
Turner, Graham 2004. To the People: Empowerment through Engagement in Sign Sociolinguistics and Language Planning. In: Theoretical Issues in Sign Language Research (TISLR 8), Barcelona, Sept. 30–Oct. 2, 2004. Abstract booklet, 180–181.
UNESCO 1996. Universal Declaration of Linguistic Rights. Barcelona, June 9th, 1996. [Available at: www.linguistic-declaration.org/decl-gb.htm]
Wheatley, Mark/Pabsch, Annika 2010. Sign Language Legislation in the European Union. Brussels: EUD.
Woodbury, Anthony 2009. What Is an Endangered Language? Linguistic Society of America (LSA) publication: www.lsadc.org/info/pdf_files/Endangered_Languages.pdf.
Trude Schermer, Bunnik (The Netherlands)
VIII. Applied issues

38. History of sign languages and sign language linguistics

1. Introduction
2. Sign languages: some initial considerations
3. Early perceptions of sign languages
4. The development of deaf education and scholarly interest in sign languages
5. The emergence of deaf communities, sign languages, and deaf schools
6. The rise of oralism in the late 19th century
7. Sign language linguistics: a discipline is born
8. The establishment of the discipline
9. Historical relationships between sign languages
10. Trends in the field
11. Conclusion
12. Literature
Abstract

While deaf individuals have used signs to communicate for centuries, it is only relatively recently (around the time of the Industrial Revolution) that communities of deaf people have come together and natural sign languages have emerged. Public schools for the deaf, established first in France and eventually across much of Europe and North America, provided an environment in which sign languages could flourish. Following a clear shift toward oralism in the late 19th century, however, sign languages were viewed by many as crude systems of gestures and signing was banned in most schools. Sign languages continued to thrive outside the classrooms and in deaf communities and clubs, however, and by the mid 20th century oralism began to wane. While scholarly interest in sign languages dates back to the Enlightenment, modern linguistic research began only in 1960. In the years since, the discipline of sign language linguistics has grown considerably, with research on over one hundred sign languages being conducted around the globe. Although the genetic relationships between the world's sign languages have not been thoroughly researched, we know that historical links have in many cases resulted from migration, world politics, as well as the export of educational systems.
1. Introduction

Natural sign languages, the complex visual-gestural communication systems used by communities of deaf people around the world, have a unique and fascinating history. While deaf people have almost certainly been part of human history since the beginning, and have likely always used gesture and signs to communicate, until fairly recent
times (the past three centuries), most deaf people lived in isolation in villages and towns. Where the incidence of deafness was great enough, village sign languages may have developed; excepting this, most deaf people used homesigns to communicate. It is only relatively recently, within the last 300 years or so, that deaf people have come together in great enough numbers for deaf communities to emerge and full natural sign languages to develop.

Deaf communities began to emerge in Europe during the Industrial Revolution of the late 18th and early 19th centuries. With the transformation of traditional agricultural economies into manufacturing economies, large numbers of people relocated from rural areas to the towns and cities where manufacturing centers were located, and for the first time greater numbers of deaf people were brought together. The related development that contributed most directly to the formation of deaf communities and the emergence and development of natural sign languages was the establishment of schools for deaf children around Europe, beginning in the mid 18th century. It was in the context of deaf communities and deaf schools that modern sign languages emerged.

This chapter presents an overview of the history of sign languages as well as the development of the discipline of sign language linguistics. Section 2 provides general background information on sign languages, and in section 3, early perceptions of sign languages are discussed. Section 4 traces the development of deaf education and early scholarly interest in sign languages that took place in 16th century Spain and Britain. The emergence of deaf communities, full natural sign languages, and public schools for the deaf are examined in section 5. Section 6 covers the rise of oralism in the late 19th century and the resulting educational shift that took place. Sections 7 and 8 chronicle the birth, development, and establishment of the discipline of sign language linguistics. The historical relationships between sign languages are examined in section 9, and section 10 reviews some overall trends in the field of sign language linguistics.
2. Sign languages: some initial considerations

As is the case with all human languages, sign languages are the product of human instinct, culture, and interaction; whenever deaf individuals are great enough in number, a natural sign language will emerge. Sign languages are, however, unique among human languages in several respects. Most obvious, and the difference that sets sign languages apart as a clearly delineated subset of human languages, is the mode of transmission – sign languages are visual-gestural as opposed to aural-oral. Indeed, there are a number of distinctions between the two language modalities, distinctions that may underlie some of the linguistic differences that have been noted between signed and spoken languages (see Meier (2002); also see chapter 25, Language and Modality, for details).

Because so few deaf children are born into deaf signing families (estimates range between 2 and 10 percent; see Mitchell/Karchmer 2004), very few deaf individuals acquire sign language in a manner similar to the way most hearing individuals acquire spoken language – as a first language, in the home, from parents and siblings who are fluent in the language. Historically, most deaf people were largely isolated from each other, and used simple homesigns and gestures to communicate with family and friends
(see Frishberg (1987) for a framework for identifying and describing homesign systems; see Goldin-Meadow (2003) for a thorough discussion of gesture creation in deaf children and the resilience of the language learning process; see Stone/Woll (2008) for a look at homesigning in 18th and 19th century Britain; also see chapter 26). There have been some exceptions to this, however, in a few, mostly remote, communities where the incidence of hereditary deafness among an isolated population is high enough that an indigenous sign language emerged and was used alongside the spoken language (see chapter 24, Shared Sign Languages, for details).

While sign languages have developed as minority languages nested within spoken language environments, individual sign languages have complex grammatical structures that are quite distinct from those found in the majority spoken language surrounding them. Although most deaf people do not have access to majority languages in their primary (spoken) form, in literate societies they do have access to, and indeed are surrounded by, a secondary form of the majority language – print. Any time two languages coexist in this manner there are bound to be cross-linguistic influences at work, and this is definitely the case with sign languages (see chapter 35, Language Contact and Borrowing). With so few deaf children born into deaf families, most users of sign languages are non-native, having been exposed to sign language as older children or, not infrequently, as adults. As a result, deaf social clubs and educational institutions (in particular residential schools for deaf children) have played, and continue to play, a major role in the transmission of culture and language within deaf communities around the world.

Over the years, and in most countries where programs of deaf education have been established, hearing educators have developed manual codes to represent aspects of the majority spoken language (usually grammatical aspects). Also referred to as manually coded languages (MCLs), these artificial sign systems (Signed German and Signed Japanese, for example) usually adopt the word order of the spoken language but incorporate the lexical signs of the native sign language. Because language contact is so pervasive in most deaf schools and communities, the various forms of language that are used have been analyzed as comprising a sign language continuum; in the case of the United States, American Sign Language (ASL), with a grammar distinct from spoken English, would be at one end, and Signed English at the other. The middle region of this continuum, often referred to as contact signing, exhibits features of both languages (Lucas/Valli 1989, 1992).

Throughout history, the culture and language of deaf people have been strongly influenced by, indeed some would argue at times defined by, members of another culture – hearing people. A complex relationship exists between members of deaf communities and the individuals who have historically tried to "help" them, in particular experts in the scientific, medical, and educational establishments (see Lane 1992; Lane/Hoffmeister/Bahan 1996; Ladd 2003). At various points and to varying degrees throughout history, sign languages have been rejected by the larger hearing society and, as a result, communities of deaf people have been forced to take their language underground.
Indeed, some have argued that the goal of many educational policies and practices has been to prevent deaf people from learning or using sign languages to communicate (the ‘oralism’ movement, see section 6). This sociolinguistic context, one laden with discrimination and linguistic oppression, has without question had an impact on the emergence and use of sign languages around the world.
3. Early perceptions of sign languages

Although we have only very limited historical accounts upon which to rely, knowledge of sign language use among deaf people dates back at least 2,000 years in Western civilizations. One of the earliest mentions of sign language surfaces in a series of Egyptian texts dating to approximately 1200 BC. In a section of warnings to the idle scribe, a magistrate admonishes, "Thou art one who is deaf and does not hear, to whom men make (signs) with the hand" (Gardiner 1911, 39, in Miles 2005). Also among the earliest written records of sign language and deaf people are the statements of Socrates in Plato's dialogue Cratylus, which dates back to the 4th century BC: "And here I will ask you a question: Suppose that we had no voice or tongue, and wanted to communicate with one another, should we not, like the deaf and dumb, make signs with the hands and head and the rest of the body?" (Plato, in Jowett 1931, 368). Dating to the late second century AD, a discussion of the legal status of signing can be found in the Mishnah, a collection of Jewish oral law: "A deaf-mute may communicate by signs and be communicated with by signs" (Gittin 5:7, in Danby 1933, 313).

Much of what we know of deafness during pre-Renaissance times has been gleaned from the theological literature. Among his writings from the 4th century, St. Augustine discusses gestures and signs as an alternative to spoken language for the communication of ideas. He notes that deaf people "signify by gesture and without the use of words, not only things which can be seen, but also many others and almost everything that we say" (St. Augustine, in Oates 1948, 377). Recent historical research has confirmed that a number of deaf people (as many as 200 at one time) worked in the Turkish Ottoman court during the 15th through 20th centuries. Their sign language was valued, often used by hearing members of the court (including many sultans), and was recognized as being capable of expressing a full range of ideas (Miles 2000).

While these early references reveal that sign languages were considered by some to be appropriate communication systems for deaf people, an alternate view was held by many; namely, that signing was inferior and that knowledge, as well as spiritual salvation, could only be gained through the spoken word. This perception dates back to the 4th century BC and the writings of the Greek philosopher Aristotle who, in his treatise On Sensation and the Sensible and other works, suggested that the sense of hearing was essential for the development of intelligence and reason (Aristotle, in Hammond 1902). It was assumed that sound was the basis of language and, by extension, thought. While Aristotle never asserted that deaf people could not be educated, his writings came to be interpreted as characterizing deaf individuals as "senseless and incapable of reason," and "no better than the animals of the forest and unteachable" (Hodgson 1954, 62). These sentiments formed the early perceptions of sign language and the educability of the deaf, and lived on in the minds of many for hundreds of years.
4. The development of deaf education and scholarly interest in sign languages

It was not until the 16th century that the Aristotelian dogma concerning the status of deafness began to be challenged in Western Europe. The Italian physician and mathematician
Gerolamo Cardano, the father of a deaf son, recognized that deafness did not preclude learning and education; on the contrary, he argued that deaf people could learn to read and write, and that human thoughts could be manifest either through spoken words or manual gestures (see Radutzky 1993; Bender 1960).

At roughly the same time, the earliest efforts to educate deaf people emerged in Spain, where a handful of wealthy Spanish families were able to hire private tutors for their deaf children. Around the mid 16th century, the Benedictine monk Pedro Ponce de León undertook the education of two deaf brothers, Francisco and Pedro de Velasco. Widely cited as the first teacher of deaf children, Ponce de León initiated a school for deaf children within the monastery at Oña. While the prevailing view in Spain held that deaf children were uneducable and could not be taught to speak, de León was successful in teaching his students to talk – an intellectual breakthrough that brought him considerable fame (Plann 1997). Records indicate that de León taught nearly two dozen students over the course of his time at the monastery, utilizing a method that included writing, a manual alphabet, and also signs – both the Benedictine signs that had been used by monks who had taken a vow of silence (see chapter 23, Manual Communication Systems: Evolution and Variation, for details) and the "homesigns" that the de Velasco brothers had developed while living at home with their two deaf sisters (Plann 1993). The manual alphabet used by de León was likely the same set of standardized handshapes used by the Franciscan monk Melchor de Yebra and eventually published in 1593 (for a thorough discussion of the role that Benedictines played in the history of deaf education, see Daniels 1997). Many of the manual alphabets currently used in sign languages around the world are descendants of this one-handed alphabet.

In the 17th century, though still available only to the privileged class, deaf education in Spain moved out of the monastery (Plann 1997). With this move came a change of methodology; methods originally developed for teaching hearing children were employed with deaf children, with a focus on phonics as a tool to teach speech and reading. During this time, Manuel Ramírez de Carrión served as a private tutor to Luis de Velasco, the deaf son of a Spanish constable. De Carrión likely borrowed heavily from the methods of Pedro Ponce de León, though he was quite secretive about his instructional techniques. In 1620, the Spanish priest Juan Pablo Bonet published an influential book, Reducción de las letras y arte para enseñar a hablar a los mudos ("Summary of the letters and the art of teaching speech to the mute"). The book lays out a method for educating deaf children that focuses on the teaching of speech (reading and writing were taught as a precursor to speech), and as such constitutes the first written presentation of the tenets of oralism. While Bonet himself had little direct experience teaching deaf children, he had served as secretary to the head of the Velasco household during the time de Carrión was employed there, and thus the methods Bonet presents as his own were likely those of de Carrión (Plann 1997). Nevertheless, Bonet's book, which includes a reproduction of de Yebra's fingerspelling chart, was pivotal to the development of deaf education and is often referred to as its literary foundation (Daniels 1997).
In Britain, the physician and philosopher John Bulwer was the first English writer to publish on the language and education of the deaf (Woll 1987). Bulwer's Chirologia, or the Natural Language of the Hand (1644), is a study of natural language and gestures and provides early insight into the sign language of deaf people in 17th century Britain, including an early manual alphabet. Dedicated to two deaf brothers, Bulwer's Philocophus
(1648) is based largely on Sir Kenelm Digby's (1644) account of the deaf education efforts in Spain, but also lays out Bulwer's (apparently unfulfilled) plans to start a school for deaf children in England (Dekessel 1992, 1993). The education of deaf children had sparked the interest of some of Bulwer's contemporaries as well, among them the British mathematician John Wallis, who served as a tutor to at least two young deaf children in the 1650s and laid the groundwork for the development of deaf education in Britain. Records of his teaching techniques indicate that he used, among other things, the sign language and two-handed manual alphabet that were used by deaf people of that time (Branson/Miller 2002). In 1680, Scottish intellectual George Dalgarno published Didascalocophus; or, the Deaf and Dumbe Man's Tutor, a book on the education of deaf children in which he explains in greater detail the two-handed manual alphabet and advocates for its use in instruction and communication. While Dalgarno's alphabet was not widely adopted, the direct ancestor of the modern British two-handed alphabet first appeared in an anonymous 1698 publication, Digiti-lingua (Kyle/Woll 1985).

Outside the realm of education, deaf people and sign languages were increasingly the subject of cultural fascination, and by the early 18th century had emerged as a compelling focus for philosophical study during the Age of Enlightenment. Among scholars of the day, sign languages were considered an important and legitimate object of study because of the insight they provided into the nature and origin of human language as well as the nature of the relationship between thought and language (see Kendon 2002). Italian philosopher Giambattista Vico argued that language started with gestures that had "natural relations" with ideas, and in his view sign languages of deaf people were important because they showed how a language could be expressed with "natural significations". Following this line of thinking, French philosopher Denis Diderot, in his Lettre sur les sourds et muets (1751, in Meyer 1965), posited that the study of natural sign languages of deaf people, which he believed were free from the structures of conventional language, might bring about a deeper understanding of the natural progression of thought. On the origin of language, French philosopher Étienne Bonnot de Condillac explored the idea that human language began with the reciprocation of overt actions, and that the first forms of language were thus rooted in action or gesture (see chapter 23 for further discussion).
5. The emergence of deaf communities, sign languages, and deaf schools

5.1. Europe

In the years before the Industrial Revolution, most deaf people were scattered across villages, and the homesigns used for communication within families and small communities were likely highly varied (see chapter 26 for discussion of homesign). But with the onset of the Industrial Revolution, as large numbers of people migrated into towns and cities across Europe, communities of deaf people came together and natural, more standardized, sign languages began to emerge. The first published account of a deaf community and natural sign language was authored by a deaf Frenchman, Pierre Desloges.
In his 1779 book, Observations d'un sourd et muèt, sur un cours elémentaire d'education des sourds et muèts ("A deaf person's observations about an elementary course of education for the deaf"), Desloges writes about the sign language used amongst the community of deaf people that had emerged in Paris by the end of the 18th century (Fischer 2002), now referred to as Old French Sign Language (Old LSF). Desloges' book was written to defend sign language against the false charges previously published by Abbé Deschamps, a disciple of Jacob Pereire, an early and vocal oralist. Deschamps believed in oral instruction of deaf children, and advocated for the exclusion of sign language, which he denigrated as limited and ambiguous (Lane 1984).

Perhaps because of the unusually vibrant scholarly and philosophical traditions that were established in the age of the French Enlightenment, and in particular the focus on language as a framework for exploring the structure of thought and knowledge, the French deaf community and emerging natural sign language were fairly well documented when compared to other communities across Europe. Nevertheless, despite a paucity of historical accounts, we know that throughout the 18th and 19th centuries, deaf communities took root and independent sign languages began to evolve across Europe and North America. One of the most important developments that led to the growth of natural sign languages was the establishment of public schools for deaf children, where deaf children were brought together and sign language was allowed to flourish.

The first public school for deaf children was founded by the Abbé Charles-Michel de l'Epée in Paris, France in the early 1760s. In his charitable work with the poor, de l'Epée had come across two young deaf sisters who communicated through sign (possibly the Old LSF used in Paris at that time). When asked by their mother to serve as the sisters' teacher, de l'Epée agreed, and thus began his life's work of educating deaf children. While de l'Epée is often cited as the "inventor" of sign language, he in fact learned natural sign language from the sisters and then, believing he needed to augment their signing with "grammar", developed a structured method of teaching the French language through signs. De l'Epée's Institution des sourds et muets, par la voie des signes méthodiques (1776) outlines his method for teaching deaf children through the use of "methodical signs", manual gestures that represented specific aspects of French grammar. This method relied, for the most part, on manual signs that were either adapted from natural signs used within the Paris deaf community or invented (and therefore served as a form of "manually coded French"). In the years following the establishment of the Paris school, schools for deaf children were opened in locations throughout France, and eventually the French manual method spread across parts of Europe. Following de l'Epée's death in 1789, Abbé Roch-Ambroise Sicard, a student of de l'Epée's who had served as principal of the deaf school in Bordeaux, took over the Paris school.

De l'Epée and his followers clearly understood the value of manual communication as an educational tool and took a serious interest in the "natural language" (i.e. LSF) used among deaf people (see Seigel 1969). One of de l'Epée's followers, Auguste Bébian, was an avid supporter of the use of LSF in the classroom.
Bébian’s most influential work, and the one that relates most directly to modern linguistic analyses of sign languages, is his 1825 Mimographie: Essai d’écriture mimique, propre à régulariser le langage des sourds-muets. In this work, Bébian introduces a sign notation system based on three cherological (phonological) aspects: the movement, the “instruments du geste” (the means of articulation), and the “points physionomiques” (applying mainly to aspects of facial expression) (Fischer 1995). Using this notation system, Bébian presents a dictionary of signs organized in such a way as to facilitate independent learning on the part of the students. In addition to serving as an educational tool, the Mimographie served as a way of standardizing and recording LSF signs, which Bébian hoped would lead to further development and serious study of the language itself (Fischer 1993). Although the primary focus of Bébian’s work was not exclusively linguistic, his Mimographie is an early and significant contribution to the linguistic study of sign languages in that his notation system represented a phonological approach to the analysis of signs.

A contrasting approach to educating deaf children was adopted in Germany by Samuel Heinicke, considered by many to be the father of oral deaf education (though Lane (1984) suggests that Heinicke was a follower of the Dutchman Johann Konrad Amann, a staunch oralist himself). Unlike de l’Epée and his colleagues in France, Heinicke’s primary concern was the integration of deaf children into hearing society, a goal best accomplished, he believed, by prohibiting the use of manual signs and focusing exclusively on speech and speech reading. These two distinct educational philosophies (manualism and oralism) spread throughout Europe in the 18th and 19th centuries; the manual approach of de l’Epée took hold in areas of Spain, Portugal, Italy, Austria, Denmark, Sweden, French Switzerland, and Russia, while the oral approach of Heinicke was adopted by educators throughout Germany and eventually in many German-speaking countries, as well as in parts of Scandinavia and Italy.

A third, “mixed” method of teaching deaf children eventually arose in Austria. The first Austrian school for deaf children was founded in Vienna in 1779 after a visit by Emperor Josef II to the Paris school. In the years that followed, “daughter institutions” were founded in cities across the Austro-Hungarian Empire. While the Viennese Institute initially employed manual methods of educating deaf students, a mixed method was eventually developed, whereby written language, signs, and the manual alphabet were used to teach spoken language (Dotter/Okorn 2003).

A combined system of teaching deaf children also developed and took hold in Great Britain, beginning in 1760 when Thomas Braidwood began teaching deaf students in Scotland (though not a public school, Braidwood’s was the first school for deaf children in Europe, serving children of the wealthy). Braidwood’s approach focused on the development of articulation and the mastery of English, and while his method has often been misrepresented as strictly oral, the original English system utilized both speech and sign. Braidwood moved to London in 1783, where he opened another private academy for teaching deaf children, and then in 1792 the first public school for deaf children opened in London (Braidwood’s nephew, Joseph Watson, served as school head). In the following decades, additional Braidwood family members headed schools that were opened in Edinburgh and Birmingham, again with the focus on developing articulation. Throughout the first half of the 19th century, most major cities in Britain opened schools for deaf children – a total of 22 schools by 1870 (see Kyle/Woll 1985).
5.2. North America

In the United States, one of the earliest attempts to organize a school for deaf children was instigated by the Bolling family of Virginia, a family in which congenital deafness persisted across generations (Van Cleve/Crouch 1989). While the first generation of deaf Bolling children received their schooling at the Braidwood Academy in Edinburgh, Scotland, the hearing father of the second generation of deaf Bolling children, William Bolling, sought to school his children in America. Bolling reached out to John Braidwood, a grandson of the founder of the Braidwood Academy and former head of the family school in England, who in 1812 had arrived from England with plans to open a school for deaf children in Baltimore, Maryland. Though Bolling suggested Braidwood start his endeavor by living with his family and tutoring the Bolling children, Braidwood’s ambitions were far grander – he wanted to make his fortune by opening his own school for deaf children. Though he had hopes of establishing his institution in Baltimore, by the fall of 1812 Braidwood had landed in a New York City jail, deeply in debt (likely the result of drinking and gambling, with which he struggled until his death). Braidwood asked Bolling for help, and from late 1812 to 1815 he lived with the Bolling family on their Virginia plantation, named Cobbs, where he tutored the deaf children. In March of 1815, Braidwood finally opened the first school for deaf children in America, in which at least five students were enrolled. The school was short-lived: it closed in the fall of 1816 when Braidwood, again battling personal problems, disappeared from Cobbs.

1815 was also the year that the American minister Thomas Hopkins Gallaudet traveled from Hartford, Connecticut to Europe in order to learn about current European methods for educating deaf children. Gallaudet’s venture had been prompted by his acquaintance with a young deaf neighbor, Alice Cogswell, and was underwritten largely by her father, Dr. Mason Cogswell. From 1814 to 1817 Alice Cogswell attended a small local school where her teacher, Lydia Huntley, utilized visual communication to teach Alice to read and write alongside hearing pupils (Sayers/Gates 2008). During these years, Mason Cogswell, an influential philanthropist, continued working toward establishing a school for deaf children in America, raising money and ultimately sending Gallaudet to Europe.

Gallaudet’s first stop was Britain, where he visited the Braidwood School in London. The training of teachers of the deaf was taken very seriously in Britain, and Braidwood’s nephew Joseph Watson insisted that Gallaudet commit to a several-year apprenticeship and vow to keep the Braidwoodian techniques secret, an offer Gallaudet declined. While at the London school, however, Gallaudet met de l’Epée’s successor, Abbé Sicard, who, along with some former students, was in London giving lecture-demonstrations on the French method of educating the deaf. Deeply impressed by the demonstrations of two accomplished former students, Jean Massieu and Laurent Clerc, Gallaudet accepted Sicard’s invitation to visit the Paris school (Van Cleve/Crouch 1989). In early 1816, Gallaudet traveled to Paris and spent roughly three months at the school, observing and learning the manual techniques used there. In mid-June of 1816, Gallaudet returned by ship to America, accompanied by Laurent Clerc, the brilliant former student, by then a teacher at the Paris school, who had agreed to accompany Gallaudet to America. As the story goes, the voyage provided the opportunity for Gallaudet to learn LSF from Clerc (Lane 1984).
Together with Mason Cogswell, Gallaudet and Clerc established in 1817 the Connecticut Asylum for the Education and Instruction of Deaf and Dumb Persons, the first permanent school for the deaf in America (the school Braidwood had opened at Cobbs, Virginia, had closed after little more than a year). Located in Hartford, the Connecticut Asylum is now named the American School for the Deaf and is often referred to simply as “the Hartford School”.
Clerc’s LSF, as well as certain aspects of the methodical signing used by de l’Epée and his followers, was introduced into the school’s curriculum, where it mingled with the natural sign of the deaf students and eventually evolved into what is now known as American Sign Language (ASL) (see Woodward (1978a) for a discussion of the historical bases of ASL). With Gallaudet and Clerc at the helm, the new American school was strictly manual in its approach to communication with its students; indeed, speech and speech reading were not formally taught at the school (Van Cleve/Crouch 1989). Most of the students at the Hartford School (just seven the first year, though this number would grow considerably) came from the surrounding New England cities and rural areas and likely brought with them homesigns as a means of communication. However, a number of deaf students came to the school from Martha’s Vineyard, the island off the coast of Massachusetts that had an unusually high incidence of hearing loss amongst its population. This unique early deaf community, centered in the small town of Chilmark, flourished from the end of the 17th century into the early 20th century. Nearly all the inhabitants of this community, both deaf and hearing, used an indigenous sign language, Martha’s Vineyard Sign Language (MVSL) (Groce 1985). It is quite likely that MVSL had an influence on the development of ASL.

In the years following the establishment of the Hartford school, dozens of residential schools for deaf children, each serving a wide geographic area, were founded in states across the eastern and central United States. The first two that followed, schools in New York and Philadelphia, experienced difficulties that led them to turn to the Hartford School for help. As a result, these first three residential schools shared common educational philosophies, curricula, and teacher-training methods; teachers and principals moved between schools, and in time these three schools, led by the Hartford School, provided the leadership for deaf education in America (Moores 1987). Many graduates of these schools went on to teach or serve as administrators at other schools for deaf children being established around the country.

While there were advocates of oral education in America (Alexander Graham Bell was perhaps the most prominent, but Samuel Gridley Howe and Horace Mann were advocates as well), the manual approach was widely adopted, and ASL flourished in nearly all American schools for deaf children during the first half of the 19th century. American manualists took great interest in the sign language of the deaf, which was viewed as ancient and noble, a potential key to universal communication, and powerfully “natural” in two respects: first, many signs were felt to retain strong ties with their iconic origins; second, sign language was considered the original language of humanity and thus closer to God (Baynton 2002). While this perspective was hardly unique to America (the same notions were widely held among European philosophers from the 18th century onward), the degree to which it influenced educational practices in America during the first half of the 19th century was notable.

The mid 1800s also saw the establishment and growth of deaf organizations in cities around the country, particularly in those towns that were home to deaf schools. During this time, sign language thrived, deaf communities developed, and American Deaf culture began to take shape (Van Cleve/Crouch 1989).
In 1864 the National Deaf-Mute College came into existence, with Edward M. Gallaudet (Thomas Hopkins Gallaudet’s son) as its president (the college had grown out of a school for deaf and blind children originally founded by philanthropist Amos Kendall). Later renamed Gallaudet College, and eventually Gallaudet University, this institution quickly became (and remains to this day) the center of the deaf world in America. With its founding, the college expanded the horizons of residential deaf school graduates by giving them access to higher education at a time when relatively few young people, hearing or deaf, had that opportunity. The college also provided a community within which ASL could flourish. From the start, all instruction at the college was based on ASL; even when oralism eclipsed manualism in the wake of the education debates of the late 19th century (see section 6), Gallaudet College remained a bastion of sign language (Van Cleve/Crouch 1989). Because it catered to young adults and hired many deaf professors, Gallaudet was instrumental in the maintenance of ASL. Gallaudet University was then, and remains today, the main source of deaf leadership in America. On an international scale, Gallaudet has become the Mecca of the Deaf world (Lane/Hoffmeister/Bahan 1996, 128).
6. The rise of oralism in the late 19th century

The second half of the 19th century saw a marked shift in educational practices, away from manual and combined methods of teaching deaf children and toward oralism. Several factors contributed to this shift, not least of which was the scientific landscape – the rise of evolutionary theory in the late 19th century, and in particular linguistic Darwinism. A strong belief in science and evolution led thinkers of the day to reject anything that seemed ‘primitive’. Crucially, in this context, sign language was viewed as an early form of language from which a more advanced, civilized, and indeed superior oral language had evolved (Baynton 1996, 2002). The shift to oralism that took place throughout the Western world was viewed as an evolutionary step up, a clear and natural move away from savagery (Branson/Miller 2002, 150). Additionally, because Germany represented (in the minds of many) science, progress, and the promise of the future, many educators considered the German oral method an expression of scientific progress (Facchini 1985, 358).

The latter half of the 19th century also saw the emergence of universal education, and with it a critical examination of the private and charity-based systems that had dominated deaf education. Many early schools for deaf children (including de l’Epée’s in Paris) were largely mission-oriented charity institutions that focused on providing deaf people with education in order to “save” them. It was not uncommon in both Europe and America for deaf schools to employ former students as instructors; at mid-century, for example, more than 40 percent of the teachers in American schools for deaf children were themselves deaf (Baynton 1996, 60). This was to change as the professionalization of the teaching field and a concerted focus on the teaching of speech pushed deaf teachers out of the classroom.

One additional factor that contributed to the rise of oralism was the shifting political ideology of nations toward assimilation and unification within individual countries. In Italy, for example, signs were forced out of the schools in an effort to unify the new nation (Radutzky 1993). Likewise, French politicians worked to unify the French people and in turn forced deaf people to use the national (spoken) language (Quartararo 1993). Shifting political landscapes were also at play across the Atlantic, where, during the post-Civil War era, American oralists likened deaf communities to immigrant communities in need of assimilation; sign language, it was argued, encouraged deaf people to remain isolated and was considered a threat to national unity (Baynton 1996).

The march toward oralism culminated in an event that is central to the history of deaf education and, by extension, to the history of sign languages – the International Congress on the Education of the Deaf, held in Milan, Italy in 1880. While various European countries and the United States were represented at the convention, the majority of the 164 participants were from Italy and France, most were ardent supporters of oral education, and all but one (the American James Denison) were hearing (Van Cleve/Crouch 1989, 109 f.). With only six people voting against it (the five Americans in attendance and the Briton Richard Elliott), the members of the Congress passed a resolution that formally endorsed pure oralism and called for the rejection of sign languages in schools for deaf children. In subsequent years, there were marked changes in political and public policy concerning deaf education in Europe. The consensus now was to promote oralism; deaf teachers were fired, and signing in the schools was largely banned (Lane 1984). In reality, there was continued resistance to pure oralism, and signing continued to be used, at least in some capacity, in some schools; without question, sign languages continued to flourish in deaf communities around Europe and North America (Padden/Humphries 1988; Branson/Miller 2002).

This shift toward exclusively oral education of deaf children likely contributed to sign languages being considered devoid of value, a sentiment that persisted for most of the following century. Had the manual tradition of de l’Epée and his followers not been largely stamped out by the Milan resolution, scholarly interest in sign languages might have continued, and sign languages might well have been recognized as full-fledged human languages by 20th-century linguists much earlier. Historically, within the field of linguistics, the philological approach to language (whereby spoken languages were considered corrupt forms of written languages) had by this time given way to a focus on the primacy of spoken language as the core form of human language. This contributed to a view of sign languages as essentially derived from spoken languages (Woll 2003). In addition, the Saussurean notion of arbitrariness (whereby the link between a linguistic symbol and its referent is arbitrary) seemed to preclude sign languages, which are rich in iconicity, from the realm of linguistic study (for discussion of iconicity, see chapter 18). And so, by the first half of the 20th century, the prevailing view, both in education and in general linguistics, was that sign languages were little more than crude systems of gestures. The American structuralist Leonard Bloomfield, for example, characterized sign languages as “merely developments of ordinary gestures” in which “all complicated or not immediately intelligible gestures are based on the conventions of ordinary speech” (Bloomfield 1933, 39). The “deaf-and-dumb” language was, to Bloomfield, most accurately viewed as a “derivative” of language.
7. Sign language linguistics: a discipline is born

Sign language linguistics is often considered a discipline that can trace its starting point to the work of one scholar, indeed to one monograph. While scholarly interest in sign languages long predated this work (particularly during the Enlightenment and into the first half of the 19th century), the inauguration of modern linguistic research on deaf sign languages took place in 1960.
7.1. Sign language research in North America

7.1.1. The pioneering work of William C. Stokoe

The first modern linguistic analysis of a sign language was published in 1960 – Sign Language Structure: An Outline of the Visual Communication Systems of the American Deaf. The author was William C. Stokoe, Jr., a professor of English at Gallaudet College, Washington DC, then the only college for the deaf in the world. Before arriving at Gallaudet, Stokoe had been working on various problems in Old and Middle English, and had come across George Trager and Henry Lee Smith’s An Outline of English Structure (1951). The procedural methods for linguistic analysis advocated by Trager and Smith made a lasting impression on Stokoe; as he was learning the basics of sign language at Gallaudet and, more importantly, watching his deaf students sign to each other, Stokoe noticed that the signs he was learning lent themselves to analysis along the lines of minimal pairs (see Stokoe 1979). This initial observation led Stokoe to explore the possibility that signs were not simply iconic pictures drawn in the air with the hands, but rather organized symbols composed of discrete parts. Stokoe spent the summer of 1957 studying under Trager and Smith at the Linguistics Institute in Buffalo, NY, sponsored by the Linguistic Society of America, and then returned to Gallaudet and began working on a structural linguistic analysis of ASL. In April of 1960, Sign Language Structure appeared in the occasional papers of the journal Studies in Linguistics, published by the Department of Anthropology and Linguistics at the University of Buffalo, New York.

Two main contributions emerged from Stokoe’s seminal monograph. First, he presented an analysis of the internal structure (i.e. the phonology) of individual signs; the three primary internal constituents identified by Stokoe were the tabula (position of the sign), the designator (hand configuration), and the signation (movement or change in configuration). This analysis of the abstract sublexical structure of signs illustrated that the signs of sign languages are compositional in nature. The second major contribution of the 1960 monograph was the transcription system Stokoe proposed (subsequently referred to as “Stokoe notation”). Prior to the publication of Sign Language Structure, there existed no means of writing or transcribing the language used by members of the American deaf community; individual signs had been cataloged in dictionaries through the use of photographs or drawings, often accompanied by written English descriptions of the gestures. Stokoe notation provided a means of transcribing signs, and in so doing helped illuminate the internal structure of the language (see chapter 43, Transcription).

In the years directly following the publication of the monograph, Stokoe continued developing his analysis of ASL. With the help of two deaf colleagues, Carl Croneberg and Dorothy Casterline, Stokoe published the first dictionary of ASL in 1965, A Dictionary of American Sign Language on Linguistic Principles (DASL). It is an impressive work, cataloging more than 2000 different lexical items and presenting them according to the linguistic principles of the language. Stokoe’s two early works formed a solid base for what was to become a new field of research – sign language linguistics.

Stokoe’s initial work on the structure of ASL was not, for the most part, well received within the general linguistics community (see McBurney 2001). The message contained within the monograph – that sign languages are true languages – ran counter to the intellectual climate within the field of linguistics at that time. In the years prior to the publication of Sign Language Structure, language was equated with speech, and linguistics was defined as the study of the sound symbols underlying speech behavior. This view of linguistics, and of what constitutes a language, was not easily changed. Furthermore, Stokoe’s analysis of ASL was nested within a structuralist framework, a framework that soon fell out of favor. The 1957 publication of Noam Chomsky’s Syntactic Structures marked the beginning of a new era of linguistic theory, in which the focus shifted from taxonomic description to an explanation of the cognitive representation of language. Because Chomsky’s emphasis on grammar as a cognitive capacity was nowhere to be found in the work of Stokoe, it is not entirely surprising that Stokoe’s monograph received little attention from the linguists of the day.

Just as Stokoe’s early work was not well received by linguists, a linguistic analysis of ASL was not something readily accepted by deaf educators and related professionals. Though ASL had been used by students outside the classroom all along, and had continued to thrive in deaf communities and deaf clubs throughout America, oralism had been standard practice in most American schools for many years, and Stokoe’s seemingly obscure and technical analysis of signs did not change that overnight. It was not until the late 1960s and early 1970s that this began to change, when many American educators, frustrated by the failure of oral methods, began investigating and considering the use of signs in the classroom. The result was an eventual shift by educators to a combined approach in which sign and speech were used together (an approach which previously had been used in many situations where deaf and hearing people needed to communicate with each other).

In the United States, the 1960s were dominated by issues of civil rights, equality, and access. This period also saw a shift in the overall perception of deaf people in America; a changing attitude toward disabled Americans and the increasing articulateness and visibility of deaf leaders brought about a new appreciation for and acceptance of the language of the deaf community. Throughout the 1970s and 1980s, because of educators’ attitudes toward ASL, a version of the combined method came into use; referred to as Simultaneous Communication, this system consisted of speaking and signing (ASL signs in English word order) at the same time. This combination of signs and spoken words came to be used in most American schools and programs serving deaf children, representing a marked change from strictly oral education. (In more recent years, as a more positive view of ASL has developed, a bilingual approach to the education of deaf children, in which ASL is considered the first language of the deaf child and English is learned as a second language, primarily through reading and writing, has taken hold in some schools across America as well as in many other countries around the globe.)
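As a purely illustrative aside (not part of Stokoe’s own apparatus, and with invented parameter labels rather than his notation), the compositional view of the sign can be sketched in a few lines of code: each sign is treated as a triple of tab, dez, and sig values, and two signs form a minimal pair when they differ in exactly one of the three. The entries below use the commonly cited ASL pair APPLE/ONION, usually described as differing only in location.

from dataclasses import dataclass

@dataclass(frozen=True)
class Sign:
    gloss: str
    tab: str  # tabula: place of articulation
    dez: str  # designator: hand configuration
    sig: str  # signation: movement

def is_minimal_pair(a: Sign, b: Sign) -> bool:
    # A minimal pair differs in exactly one of the three parameters.
    differences = sum(
        left != right
        for left, right in [(a.tab, b.tab), (a.dez, b.dez), (a.sig, b.sig)]
    )
    return differences == 1

# Hypothetical feature labels: same handshape and movement,
# different place of articulation.
APPLE = Sign("APPLE", tab="cheek", dez="bent index", sig="twist")
ONION = Sign("ONION", tab="near eye", dez="bent index", sig="twist")

print(is_minimal_pair(APPLE, ONION))  # True: only tab differs

On such a view, a dictionary can be ordered by parameter values rather than by translation equivalents, which is in the spirit of Stokoe’s organization of entries “on linguistic principles”.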
7.1.2. The development of the discipline

Despite the fact that his early work was not acknowledged or accepted by linguists or educators, Stokoe continued his research on the linguistics of ASL and inspired many others to join him (see Armstrong/Karchmer/Van Cleve (2002) for a collection of essays in honor of Stokoe). Over the course of the next few decades, a growing number of scholars turned their attention to sign languages.

In 1970, the Laboratory for Language and Cognitive Studies (LLCS) was established at the Salk Institute for Biological Studies in San Diego, under the directorship of Ursula Bellugi (the laboratory’s name was eventually changed to Laboratory for Cognitive Neuroscience). In the years since its founding, this laboratory has hosted a number of researchers who have conducted an impressive amount of research on the grammar, acquisition, and processing of ASL, both at the LLCS and at other institutions across North America. Among the researchers involved in the lab in the early years were Robbin Battison, Penny Boyes Braem, Karen Emmorey, Susan Fischer, Nancy Frishberg, Harlan Lane, Ella Mae Lentz, Scott Liddell, Richard Meier, Don Newkirk, Elissa Newport, Carlene Pederson, Laura-Ann Petitto, Patricia Siple, and Ted Supalla (see Emmorey/Lane (2000), a Festschrift honoring the life and work of Ursula Bellugi and Edward Klima, her husband and colleague, for original contributions by many of these researchers). Also in San Diego, graduate students of linguistics and psychology at the University of California researched various aspects of ASL structure; the best known, perhaps, is Carol Padden, a deaf linguist whose 1983 PhD dissertation (published in 1988) was, and remains, an influential study of the morphosyntax of ASL.

In 1971, the Linguistics Research Lab (LRL) was established at Gallaudet College. Stokoe served as its director and continued to work, along with a number of other researchers, on the linguistic analysis of ASL. Although the LRL closed its doors in 1984, many researchers who were involved there went on to do important work in the field, including Laura-Ann Petitto, Carol Padden, James Woodward, Benjamin Bahan, MJ Bienvenu, Susan Mather, and Harry Markowicz.

The following year, 1972, Stokoe began publishing the quarterly journal Sign Language Studies. Although publication was briefly suspended in the 1990s, for nearly 40 years Sign Language Studies (currently published by Gallaudet University Press and edited by Ceil Lucas) has served as a primary forum for the discussion of research related to sign languages. 1972 also saw the publication of one of Stokoe’s later influential works, Semiotics and Human Sign Languages (Stokoe 1972).

The first linguistics PhD dissertation on ASL was written at Georgetown University in 1973 by James Woodward, one of the researchers who had worked with Stokoe in his research lab. Since that time, hundreds of theses and dissertations have been written on sign languages, representing all the major subfields of linguistic analysis. Also in 1973, a section on sign language was established at the annual conference of the Linguistic Society of America (LSA), signaling a broadened acceptance of sign languages as legitimate languages. At the 1973–74 LSA meeting, members of the LLCS presented research on the phonological structure of ASL and its course of historical change into a contrastive phonological system. Henceforth, research on ASL began to have an impact on the general linguistics community (Newport/Supalla 2000). In 1974, the first conference on sign language was held at Gallaudet, and in 1979, the Department of Linguistics was established there.
At present, several other academic departments around the United States have a particular focus on ASL, including Boston University, University of Arizona, Rutgers University, University of Rochester, University of California at San Diego, University of Texas at Austin, University of New Mexico, and Purdue University. In 1977, Harlan Lane founded the Language Research Laboratory at Northeastern University, Boston, Massachusetts. Initially working with deaf research assistants Marie Philip and Ella Mae Lentz, Lane’s laboratory conducted research on the psycholinguistics of sign language into the early 1990s (other collaborators who went on to make significant contributions to the field included François Grosjean, Judy Shepard-Kegl, Howard Poizner, and Trude Schermer).

Though much of the initial research on ASL focused on phonological structure, throughout the 1970s and 1980s an increasing number of American researchers published works that contributed to a broader understanding of the linguistic structure of ASL. Among the areas explored during this period were complex word structure, particularly verbal constructions (Fischer 1973; Padden 1983); derivational processes (Supalla/Newport 1978); verbs of motion and location (Supalla 1982); syntactic structure, including non-manual markers (Liddell 1980); and historical variation and change (Frishberg 1975; Woodward 1976).

Aside from Stokoe’s 1960 monograph, the single most influential work in the emerging discipline of sign language linguistics and, more specifically, in the acceptance of sign languages as real languages, worthy of linguistic study, was Edward Klima and Ursula Bellugi’s The Signs of Language (SOL). SOL had begun as a collection of working papers at the Salk Institute’s LLCS, but it was developed into a full-length book and published by Harvard University Press in 1979. Although this volume was, and still is, referred to as “the Klima and Bellugi text”, it is in fact a summary of research conducted by a number of scholars throughout the 1970s, all of whom worked with Bellugi at Salk. In addition to discussing the internal structure of signs, SOL contains analyses of, among other things, the origin and development of ASL, historical changes in the language, the nature of iconicity, the grammatical processes at work, the coding and processing of signs, and wit and poetry in ASL. The text is particularly thorough in its treatment of the various aspects of ASL morphology. In contrast to Sign Language Structure, SOL was widely read by linguists and scholars from related disciplines. SOL was, and still is, viewed as a groundbreaking contribution to the field, one that demonstrates that human language does not have to be spoken, and that the human capacity for language is more profound than the mere capacity for vocal-auditory communication. While the changing theoretical landscape of Stokoe’s time worked against him, it worked to the advantage of Klima and Bellugi; the paradigmatic shift in linguistic theory that followed from the work of Chomsky (1957, 1965) created a new space for, and interest in, the study of sign languages (see McBurney 2001): if the universal principles proposed to exist in all languages are in fact universal, then they should also be able to account for language in another modality.

While most deaf Anglophone residents of Canada use ASL (the varieties of LSF and British Sign Language brought by immigrants in the 19th century have largely disappeared), Langue des Signes Québécoise (LSQ), which is historically related to LSF, is used primarily by deaf people in French-speaking Quebec (Winzer 1987). In the late 1970s, a group of deaf researchers, headed by Paul Bourcier and Julie Elaine Roy, began working on a dictionary of LSQ at the Institut Raymond-Dewar in Montréal.
Also at this time, Rachel Mayberry conducted some preliminary research on the comprehension of LSQ. In 1983, Laura-Ann Petitto established the Cognitive Science Laboratory for Language, Sign Languages, and Cognition at McGill University in Montréal. Petitto had previously done studies on manual babbling in children exposed to sign languages, a research program begun when she worked with Ursula Bellugi at the Salk Institute. Once established at McGill, Petitto focused on investigating the phonological structure, acquisition, and neural representation of LSQ. Deaf artist and research assistant Sierge Briere was a key collaborator in Petitto’s research into the linguistics of LSQ, and graduate student Fernande Charron pursued developmental psycholinguistic studies in the lab as well. In 1988, Colette Dubuisson of Université du Québec à Montréal received a research grant from the Social Sciences and Humanities Research Council of Canada (SSHRC) and began working on the linguistics of LSQ; collaborators in this group included Robert Fournier, Marie Nadeau, and Christopher Miller.
7.2. Sign language research in Europe

The earliest work on sign communication in Europe was Bernard Tervoort’s 1953 dissertation (University of Amsterdam), Structurele Analyse van Visueel Taalgebruik Binnen een Groep Dove Kinderen (“Structural Analysis of Visual Language Use in a Group of Deaf Children”) (Tervoort 1954). While Tervoort is considered one of the founding fathers of international sign language research, his 1953 thesis has not, for the most part, been considered a modern linguistic analysis of a sign language, because the signing he studied was not a complete and natural sign language. The Dutch educational system forbade the use of signs in the classroom, so most of the signs the children used were either homesigns or signs developed amongst the children themselves. The communication he studied did not, therefore, represent Sign Language of the Netherlands (NGT), a fully developed natural sign language. Nevertheless, Tervoort treated the children’s signing as largely linguistic in nature, and his descriptions of the signing suggest a complex structural quality (see also Tervoort 1961).

Research on the linguistics of natural sign languages emerged later in Europe than it did in the United States. Two factors contributed to this (Tervoort 1994). First, most European schools for deaf children maintained a strictly oral focus longer than did schools in North America; signing was discouraged, and sign languages were not understood to be “real” languages worthy of study. Second, the rise in social status and acceptance of sign language that deaf Americans enjoyed beginning in the late 1960s was not, for the most part, experienced by deaf Europeans until later.

European sign language research began in the 1970s, and became established most quickly in Scandinavia. While Swedish Sign Language (SSL) and Finnish Sign Language (FinSL) dictionaries appeared in the early 1970s, it was in 1972 that formal research on the linguistics of SSL began at the Institute of Linguistics, University of Stockholm, under the direction of Brita Bergman (Bergman 1982). Other linguists involved in early work on SSL included Inger Ahlgren and Lars Wallin. Research on the structure of Danish Sign Language (DSL) began in 1974, with early projects initiated by Britta Hansen at the Døves Center for Total Communication (KC) in Copenhagen. Hansen collaborated with Kjær Sørensen and Elisabeth Engberg-Pedersen to publish the first comprehensive work on the grammar of DSL in 1981, and since that time there has been a considerable amount of research into various aspects of DSL. In neighboring Scandinavian countries, research on the structure of other indigenous sign languages began in the late 1970s, when Marit Vogt-Svendsen began working on Norwegian Sign Language (NSL) at the Norwegian Postgraduate College of Special Education, near Oslo, and Odd-Inge Schröder began a research project at the University of Oslo (Vogt-Svendsen 1983; Schröder 1983). While dictionary work began in the 1970s, formal investigations into the structure of FinSL began in 1982 with the FinSL Research Project, directed by Fred Karlsson at the Department of General Linguistics of the University of Helsinki, with some of the earliest research conducted by Terhi Rissanen (Rissanen 1986).

In Germany, scholars turned their attention to German Sign Language (DGS) in 1973, when Siegmund Prillwitz, Rolf Schulmeister, and Hubert Wudtke started a research project at the University of Hamburg (Prillwitz/Leven 1985). In 1987, Prillwitz founded the Centre (now Institute) for German Sign Language and Communication of the Deaf at the University of Hamburg. Several other researchers contributed to the early linguistic investigations, including Regina Leven, Tomas Vollhaber, Thomas Hanke, Karin Wempe, and Renate Fischer. One of the most important projects undertaken by this research group was the development and release, in 1987, of the Hamburg Notation System (HamNoSys), a phonetic transcription system developed in the tradition of Stokoe’s early notation (see chapter 43, Transcription). A second major contribution is the International Bibliography of Sign Language: a searchable online database covering over 44,000 publications related to sign language and deafness, it is a unique and indispensable research tool for sign linguists.

The first linguistics PhD thesis on a European sign language (British Sign Language, or BSL) was written by Margaret Deuchar (Stanford University, California, 1978). Following this, in the late 1970s and early 1980s, there was a marked expansion of research on the linguistics of BSL. In 1977, a Sign Language Seminar was held at the Northern Counties School for the Deaf at Newcastle-upon-Tyne; co-sponsored by the British Deaf Association and attended by a wide range of professionals as well as researchers from the Swedish Sign Linguistics Group and Stokoe from Gallaudet, this seminar marked a turning point after which sustained research on BSL flourished (Brennan/Hayhurst 1980). In 1978, the Sign Language Learning and Use Project was established at the University of Bristol (James Kyle, director, with researchers Bencie Woll, Peter Llewellyn-Jones, and Gloria Pullen, the deaf team member and signing expert). This project eventually led to the formation of the Centre for Deaf Studies (CDS) at Bristol, co-founded in 1980 by James Kyle and Bencie Woll. Early research here focused on language acquisition and coding of BSL, but before long scholars at CDS were exploring all aspects of the structure of BSL (Kyle/Woll 1985). Spearheaded by Mary Brennan, with collaboration from Martin Colville and deaf research associate Lilian Lawson, the Edinburgh BSL Research Project (1979–84) focused primarily on the tense and aspect system of verbs and on developing a notation system for BSL. The Deaf Studies Research Unit at the University of Durham was established in 1982, and researchers there worked to complete a dictionary of BSL, originally begun by Allan B. Hayhurst in 1971 (Brien 1992).
In the mid 1970s, Bernard Mottez and Harry Markowicz at the Centre National de la Recherche Scientifique (CNRS) in Paris began mobilizing the French deaf community and working toward the formal recognition and acceptance of LSF. While the initial focus here was social, research on the structure of LSF began soon after. Two of the first researchers to focus specifically on linguistic aspects of LSF, in the late 1970s, were Christian Cuxac (Cuxac 1983) and Danielle Bouvet. Another important early scholar was Paul Jouison, who in the late 1970s worked with a group of deaf individuals in Bordeaux to develop a notation system for LSF (described in Jouison 1990) and went on to publish a number of important works.

In Italy, Virginia Volterra and Elena Pizzuto were among the first to do research on Italian Sign Language (LIS) in the late 1970s and early 1980s. Working at the CNR Institute of Psychology in Rome, these researchers conducted a wide range of investigations into various linguistic, psycholinguistic, educational, and historical aspects of LIS (Volterra 1987). Also in the late 1970s, Penny Boyes Braem, who had worked alongside Ursula Bellugi at the LLCS, started a research center in Basel, Switzerland, and began investigating the structure of Swiss-German Sign Language (Boyes Braem 1984).

Modern linguistic research on NGT began in the early 1980s. Following the publication of his 1953 thesis, Bernard Tervoort continued to publish works on language development and sign communication in deaf children. In 1966, Tervoort became a full professor at the University of Amsterdam, where he helped establish the Institute for General Linguistics. Tervoort’s contribution to sign language studies is both foundational and significant: his thesis was a major step toward understanding the language of deaf children, he inspired and mentored many scholars, and his later work paved the way for linguistic research projects into NGT (see Kyle 1987). In 1982, the first formal sign language research group was established at the Dutch Foundation for the Deaf and Hard of Hearing Child (NSDSK), with support from Tervoort at the University of Amsterdam. The initial focus here was research into the lexicon of NGT; Trude Schermer served as the project director for the development of the first NGT dictionary, working in collaboration with Marianne Stroombergen, Rita Harder, and Heleen Bos (see Schermer 2003). Research on various grammatical aspects of NGT was also conducted at the NSDSK, and later by Jane Coerts and Heleen Bos at the University of Amsterdam as well. In 1988, Anne Baker took over the department chair from Tervoort, and she has had a substantial impact on sign language research in the Netherlands since then.

Early research on Russian Sign Language (RSL) took place in the mid 1960s at the Institute of Defectology in Moscow (now the Scientific-Research Institute of Corrective Pedagogy), where in 1969 Galina Zaitseva completed her PhD thesis on spatial relationships in RSL. While additional documentation of and research on the language has been slow in coming, the founding in 1998 of the Centre for Deaf Studies in Moscow, with the late Zaitseva as its original academic director, has brought about a renewed interest in linguistic research.

Over the past few decades, sign language research has continued to flourish across much of Europe, and the number of individual researchers and research groups has grown considerably. While some European sign languages have received more scholarly attention than others, most natural sign languages found in Europe have been subject to at least some degree of linguistic investigation. The focus here has been on the earliest work on individual European sign languages; some more recent developments are discussed in section 8.
7.3. Sign language research in other parts of the world

Research on the linguistics of natural sign languages outside North America and Europe is, for the most part, less well-established, though certainly underway in a number of other countries around the globe. Dictionaries have been compiled for many of these sign languages, and additional research into the linguistic structure of some has been undertaken.
7.3.1. Asia

The early 1970s marked the start of research on Israeli Sign Language (Israeli SL), with linguistic investigations and dictionary development instigated by Izchak Schlesinger and Lila Namir (Schlesinger/Namir 1976). In the early 1990s, intensive descriptive and theoretical linguistic research began at the University of Haifa, leading to the establishment in 1998 of the Sign Language Research Laboratory, with Wendy Sandler as director (see Meir/Sandler 2007). A 1982 dissertation by Ziad Salah Kabatilo provided the first description of Jordanian Sign Language, but there was no further research on the language until Bernadet Hendriks began investigating its structure in the mid 2000s (Hendriks 2008). A Turkish Sign Language dictionary was published in 1995, but research into the structure of the language did not begin until the early 2000s, when Ulrike Zeshan, Aslı Özyürek, Pamela Perniss, and colleagues began examining the phonology, morphology, and syntax of the language.

In the early 1970s, research on Japanese Sign Language (NS) was initiated. The Japanese Association of the Deaf published a five-volume dictionary in 1973, and Fred Peng published early papers on a range of topics. Other early researchers included S. Takemura, S. Yamagishi, T. Tanokami, and S. Yoshizawa (see Takashi/Peng 1976). The Japanese Association of Sign Language was founded in 1974, and academic conferences have been held, with proceedings published, since 1979.

In the mid 1970s, research on Indian Sign Language was undertaken by Madan Vasishta, in collaboration with the Americans James Woodward and Kirk Wilson (Vasishta/Woodward/Wilson 1978). More recent research by Ulrike Zeshan (2000) has revealed that the sign languages of India and Pakistan are, in fact, varieties of the same language, which she terms Indo-Pakistani Sign Language (IPSL). Initial documentation and dictionary work on the Pakistani variety of IPSL was done in the late 1980s by the ABSA (Anjuman-e-Behbood-e-Samat-e-Atfal) Research Group, established in 1986; in more recent years, the Pakistan Association of the Deaf has established a Sign Language Research Group dedicated to analyzing sign language in Pakistan.

Chinese Sign Language was also the subject of scholarly interest in the mid 1970s, with a dictionary published in 1977 and initial research conducted by Shun-Chiu Yau (1991). Recently, a comprehensive dictionary of Hong Kong Sign Language (HKSL) has become available, based on research led by Gladys Tang and colleagues at the recently established Centre for Sign Language and Deaf Studies (CSLDS) at the Chinese University of Hong Kong (Tang 2007). Since 2003, CSLDS has run a comprehensive Asia-Pacific sign linguistics research and training program.

In Taiwan, early attempts to document Taiwanese Sign Language (TSL) began in the late 1950s and continued throughout the 1960s and early 1970s (Smith 2005), but it was not until the late 1970s that TSL came under closer study, when Wayne Smith began publishing a series of papers examining several aspects of the language. In 1981, following several years of research on TSL, Chao Chienmin published Natural Sign Language (rev. ed. Chao et al. 1988). Julia Limei Chen researched a range of TSL features in the 1980s, a dictionary was published in 1983, and Wayne Smith wrote a dissertation on the morphological structure of TSL (Smith 1989). The 1990s saw continued investigations into the structure of TSL, including a collection of papers by the linguist Jean Ann examining the phonetics and phonology of the language.

A dictionary of Korean Sign Language (KSL) was published in 1982, but it was only nearly two decades later that the language received scholarly attention: J. S. Kim worked on gesture recognition, and Sung-Eun Hong (2003, 2008) on classifiers and verb agreement. A Filipino Sign Language dictionary was published by Jane MacFadden in 1977; since that time, there have been additional research and publications on the language, led by Lisa Martinez. Sign language in Thailand came under study in the mid 1980s, with a dictionary published by Suwanarat and Reilly in 1986 and initial research on spatial locatives. In the late 1990s and early 2000s, James Woodward published several papers on the historical relationships between sign languages in Thailand and Viet Nam. Short dictionaries of Cambodian Sign Language have been published as part of the Deaf Development Program in Phnom Penh, and work on Burmese Sign Language has recently begun (Justin Watkins, personal communication).
7.3.2. South America

The first South American sign language to be documented was Brazilian Sign Language (LSB), which is used by urban deaf communities in the country. An early volume appeared in 1875: inspired by work coming out of France at the time, the deaf student Flausino José da Gama compiled a dictionary organized by category of sign. A second illustrated dictionary appeared in 1969, authored by the American missionary Eugene Oates. Following this early work, it was not until the early 1980s that the structure of LSB came under study, with scholarly analysis on a wide range of topics conducted by Lucinda Ferreira-Brito and Harry Hoemann, among others. More recently, Ronice Müller de Quadros wrote a 1999 dissertation on the syntactic structure of Brazilian Sign Language and has gone on to establish a Deaf Studies program at Santa Catarina University in Florianópolis. Because studies of LSB were available relatively early, it was among the first sign languages to be included in cross-linguistic comparisons.

Compared to other South American sign languages, Argentine Sign Language (LSA) is relatively well researched. Beginning in the early 1990s, Maria Ignacia Massone and colleagues began investigations into a wide range of topics, including kinship terms, number, gender, grammatical categories, word order, tense and modality, nonmanuals, and phonetic notation of LSA (Massone 1994). A dictionary was also compiled and published in 1993.

In 1991, a volume on the syntactic and semantic structure of Chilean Sign Language (ChSL) was published by Mauricio Pilleux and colleagues at the Universidad Austral de Chile. A dictionary came out that same year, and since then there have been several studies examining aspects such as negation, spatial locatives, and the psycholinguistic processing of ChSL.

Colombian Sign Language (CoSL) came under study in the early 1990s, with a dictionary published in 1993. A deaf education manual published that same year included a discussion of linguistic descriptions of some aspects of CoSL. In the late 1990s, Nora Lucia Gomez began research on the morphology and phonology of the language, and Alejandro Oviedo began research on classifier constructions. Oviedo published a large and comprehensive volume on the grammar of CoSL in 2001.

Research into the sign languages of other South American countries has yet to be fully established, though dictionaries have been compiled for several, including those of Uruguay (1988), Paraguay (1989), and Guyana (2001). A dictionary of Venezuelan Sign Language was published in 1983, and an unpublished manuscript dated 1991 provides a linguistic analysis of the language.
7.3.3. Oceania

A significant body of research, dating back to the early 1900s, exists on the sign languages traditionally used by Aboriginal communities in some parts of Australia. These languages are used as alternatives to spoken languages, often in connection with taboos concerning speech between certain members of the community or at particular times (e.g. during a period of mourning). LaMont West, an American linguist, produced a 1963 report on his and others’ research on Australian Aboriginal sign languages, and the English scholar Adam Kendon turned his attention to these languages in the mid 1980s (Kendon 1989) (see chapter 23 for further discussion).

Research on the lexicon and structure of Australian Sign Language (Auslan) began in the early 1980s. Trevor Johnston’s 1989 doctoral dissertation, which included a dictionary, was the first full-length study of the linguistics of Auslan, and Johnston’s continued work with colleagues Robert Adam and Adam Schembri led to the publication in 1998 of a new and comprehensive dictionary of Auslan. A recent collaboration with Adam Schembri has produced a comprehensive introduction to the language (Johnston/Schembri 2007). Teaching materials and several academic publications have also been produced by Jan Branson and colleagues at the National Institute for Deaf Studies, established in 1993 at La Trobe University.

Studies of New Zealand Sign Language (NZSL) began in the early 1970s with an unpublished thesis by Peter Ballingall that examined the sign language of deaf students and concluded that it is a natural language. In the early 1980s, the American Marianne Collins-Ahlgren began research on NZSL, which culminated in her 1989 thesis (Victoria University, Wellington) comprising the first full description of the grammar. In 1995, Victoria University established the Deaf Studies Research Unit (DSRU), where researchers continued investigations into the lexicon and grammar of NZSL. The first major project of the DSRU was the development and publication in 1997 of a comprehensive dictionary of NZSL. Research at the DSRU is ongoing, currently under the direction of David McKee, and includes a large study examining sociolinguistic variation in NZSL.
7.3.4. Africa

While there are at least 24 sign languages in Africa (Kamei 2006), sign language research is relatively sparse in the region. This appears to be changing, as evidenced by a recent Workshop on Sign Language in Africa, held in 2009 in conjunction with the 6th World Congress of African Linguistics in Leiden (the Netherlands).

In North Africa, dictionaries have been compiled for Libyan Sign Language (1984), Egyptian Sign Language (1984), and Moroccan Sign Language (1987).

Sign language research began in West Africa in the mid 1990s, when the linguist Constanze Schmaling began studying Hausa Sign Language, the sign language used by members of the deaf community in areas of Northern Nigeria. Her 1997 dissertation (published in 2000) provides a descriptive analysis of the language. In 2002, the linguist Victoria Nyst began studying Adamorobe Sign Language, an indigenous sign language used in an eastern Ghana village that has a very high incidence of deafness. Nyst completed and then published a dissertation containing a sketch grammar of the language (Nyst 2007). Currently with the Leiden University Centre for Linguistics, Nyst has initiated a research project to document and describe another West African sign language, Malinese Sign Language, for which a dictionary was compiled in 1999.

With the exception of a dictionary for Congolese Sign Language (1990) and a recently published dictionary of Rwandan Sign Language (2009), there appears to have been limited, if any, linguistic research on sign languages used in Central African countries.

An early description of East African signs appeared in the journal Sign Language Studies in 1977, and a paper on Ethiopian Sign Language was presented at the 1979 World Congress of the World Federation of the Deaf. A dictionary of Kenyan Sign Language was made available in 1991, and in 1992 a linguistic study was undertaken by Philemon Akach, who examined sentence formation in Kenyan Sign Language. While there has been additional interest in the language, most of it has focused on sign language development and education in the country. One exception is a 1997 paper by Okombo and Akach on language convergence and wave phenomena in the growth of Kenyan Sign Language. The Tanzanian Association of the Deaf published a dictionary in 1993. A Ugandan Sign Language dictionary was published in 1998, and the following year a diploma thesis (Leiden University, the Netherlands) by Victoria Nyst addressed handshape variation in Ugandan Sign Language. Finally, an Ethiopian Sign Language dictionary was published in 2008, and Addis Ababa University has recently launched an Ethiopian Sign Language and Deaf Culture Program, one of whose aims is to increase collaborative research on the language.

In the mid 1970s, Norman Nieder-Heitmann began researching sign languages in South Africa, and in 1980 a dictionary was published. A second dictionary was published in 1994, the same year that a paper by C. Penn and Timothy Reagan appeared in Sign Language Studies, exploring lexical and syntactic aspects of South African Sign Language. More recently, Debra Aarons and colleagues have investigated a wide range of topics, including non-manual features, classifier constructions and their interaction with syntax, and the sociolinguistics of sign language in South Africa. A research project has also recently been launched to investigate the structural properties of the sign languages used by different deaf communities in South Africa, in order to determine whether there is one unified South African Sign Language or many different languages.
8. The establishment of the discipline

One indication that sign language linguistics has become a mature field of study is its professionalization. Over the years, there have been several different series of conferences or symposia dealing with sign language linguistics, nearly all of which were followed by the publication of conference proceedings. The volumes themselves contain an impressive body of research that formed the core of the literature for the discipline.

The earliest of these was the National Symposium on Sign Language Research and Training. Primarily a meeting of American researchers and educators, the NSSLRT held its first symposium in 1977 (Chicago, IL, USA); others followed in 1978 (Coronado, CA, USA), 1980 (Boston, MA, USA), and 1986 (Las Vegas, NV, USA). In the summer of 1979, two international gatherings of sign language linguists were held in Europe. In June, the first International Symposium on Sign Language Research (ISSLR), organized by Inger Ahlgren and Brita Bergman, was held in Stockholm, Sweden. Then, in August of 1979, Copenhagen was the site of the NATO Advanced Study Institute on Language and Cognition: Sign Language Research, which was organized by Harlan Lane, Robbin Battison, and François Grosjean. The ISSLR held a total of five symposia, with additional meetings in 1981 (Bristol, England), 1983 (Rome, Italy), 1987 (Lappeenranta, Finland), and 1992 (Salamanca, Spain). In contrast to the ISSLR, which brought together researchers from North America and Europe, the European Congress on Sign Language Research (ECSL) focused on research being conducted on European sign languages. Four meetings were held by this congress: 1982 (Brussels, Belgium), 1985 (Amsterdam, the Netherlands), 1989 (Hamburg, Germany), and 1994 (Munich, Germany). The International Conference on Theoretical Issues in Sign Language Research (TISLR) first convened in 1986 (Rochester, NY, USA) and was followed by (largely) biennial conferences in 1988 (Washington, DC, USA), 1990 (Boston, MA, USA), 1992 (San Diego, CA, USA), 1996 (Montreal, Canada), 1998 (Washington, DC, USA), 2000 (Amsterdam, the Netherlands), 2004 (Barcelona, Spain), 2006 (Florianopolis, Brazil), and 2010 (West Lafayette, IN, USA). In 2006, the first in a yearly series of conferences aimed at broadening the international base in sign language linguistics was held in Nijmegen, the Netherlands; originally called CLSLR (Cross-Linguistic Sign Language Research), the conference now goes by the name of SIGN. In 2008, the SignTyp Conference on the phonetics and phonology of sign languages was held at the University of Connecticut, Storrs, USA. While researchers from Europe, North America, and other countries around the world continue to hold smaller conferences, workshops, and seminars, TISLR has become the primary international conference for sign language researchers.

Gallaudet University Press was established in 1980 to disseminate knowledge about deaf people, their languages, their communities, their history, and their education through print and electronic media. The International Sign Linguistics Association (ISLA) was founded in 1987 to encourage and facilitate sign language research throughout the international community. Three publications came out of this organization: the newsletter Signpost, which first appeared in 1988, became a quarterly periodical in the early 1990s, and was published by ISLA until 1995; The International Journal of Sign Linguistics (1990–1991); and The International Review of Sign Linguistics (1996, published by Lawrence Erlbaum).
In 1998, John Benjamins began publishing the peer-reviewed journal Sign Language & Linguistics, with Ronnie Wilbur serving as general editor until 2007, at which time Roland Pfau and Josep Quer assumed editorial responsibilities. ISLA folded in the late 1990s, and calls to create a new organization began at the 1998 TISLR meeting. It was replaced by the international Sign Language Linguistics Society (SLLS), which officially began at the 2004 TISLR meeting in Barcelona. In 1989, Signum Press was created; an outgrowth of the Institute for DGS at the University of Hamburg, Signum Press publishes a wide range of books and multimedia materials in the area of sign language linguistics, and also publishes Das Zeichen, a quarterly journal devoted to sign language research and deaf communication issues. Finally, the online discussion list SLLing-L has been up and running since the early 1990s and is devoted to the discussion of the linguistic aspects of natural sign languages. With hundreds of subscribers around the globe, this electronic forum has become a central means of facilitating scholarly exchange and has thus played an important role in the growth of the discipline of sign language linguistics.

As far as research is concerned, the recent establishment of a few research centers is worth noting. Established in 2006, the Deafness Cognition and Language Research Centre (DCAL) at University College London aims to study the origins, development, and processing of human language using sign languages as a model. With Bencie Woll as director, DCAL is home to a growing number of researchers in the fields of sign linguistics, psychology, and neuroscience. In early 2007, Ulrike Zeshan founded the International Centre for Sign Language and Deaf Studies (iSLanDS) at the University of Central Lancashire. The research center incorporates the Deaf Studies program (offered since 1993) but expands research and teaching to encompass an international dimension, with documentation of and research on sign languages around the world as well as the development of programs to provide higher education opportunities for deaf students from across the globe. Launched in 2008, the sign language research group at Radboud University Nijmegen, led by Onno Crasborn, conducts research on the structure and use of NGT, and is also a leading partner in SignSpeak, a European collaborative effort to develop and analyze sign language corpora with the aim of developing vision-based technology for translating continuous sign language to text.
9. Historical relationships between sign languages

While a rich body of comparative research has elucidated the genetic relationships among the world's spoken languages (the nearly 7,000 spoken languages can be divided into roughly 130 major language families), the same cannot be said for sign languages, on which much comparative research remains to be done. The historical connections between some sign languages have been explored (see, for example, Woodward 1978b, 1991, 1996, 2000; McKee/Kennedy 2000; Miller 2001; among others), but there have been only a few attempts to develop more comprehensive historical mappings of the relationships between a broad range of the world's sign languages (Anderson 1979; Wittmann 1991). The precise number of sign languages in existence today is not known, but the Ethnologue database (16th edition, Lewis 2009) currently lists 130 "deaf sign languages", up from 121 in the 2005 survey. The fact that the Ethnologue lists "deaf sign languages" as one language family among 133 total language families highlights the extent to which sign languages are under-researched, and it brings into focus the challenges involved in placing sign languages into a larger comparative historical context.
Whereas spoken languages have evolved over thousands of years, modern sign languages have evolved very rapidly, indeed one might argue spontaneously, and many have emerged largely independently of each other, making traditional historical comparisons difficult. Notably, there is some question as to the validity and usefulness of standard historical comparative methods when attempting to determine the historical relationships between sign languages. Most researchers acknowledge that traditional comparative techniques must be modified when studying sign languages. For example, the original 200-word Swadesh list used to compare basic vocabularies across spoken languages has been modified for use with sign languages; in order to reduce the number of false potential cognates, words such as pronouns and body parts that are represented indexically (via pointing signs), as well as words whose signs are visually motivated or iconic (such as DRINK), have been factored out (Woodward 1978b; Pizzuto/Volterra 1996; McKee/Kennedy 2000; Hendriks 2008; an illustrative computation is given at the end of this section). In addition, because sign languages are so young, researchers must adapt the time scale used to calculate differences between sign languages (Woll/Sutton-Spence/Elton 2001). As scholars have begun to study the various sign languages from a historical comparative angle, a lack of documentation of the oldest forms of sign languages has made research difficult. Furthermore, it can be challenging to distinguish between relatedness due to genetic descent and relatedness due to language contact and borrowing, which is quite pervasive among sign languages. During their emergence and growth, individual sign languages come into contact with other natural sign languages, with the majority spoken language(s) of the culture, with signed versions of the majority spoken language, and with gestural systems that may be in use within the broader community (see chapter 35, Language Contact and Borrowing, for details). The extensive and multi-layered language contact that occurs can make traditional family tree classifications difficult.

Analyses have shown that historical links between sign languages have been heavily influenced by, among other things, world politics and the export of educational systems (see Woll/Sutton-Spence/Elton 2001; Woll 2006). For example, the historical legacy of the Habsburg Empire has resulted in a close relationship between the sign languages of Germany, Austria, and Hungary. BSL has had a strong influence on sign languages throughout the former British Empire; after being educated in Britain, deaf children often returned to their home countries, bringing BSL signs with them. Additionally, the emigration of British deaf adults to the colonies has resulted in strong connections between BSL and Auslan, NZSL, Maritime Sign Language in Nova Scotia, and certain varieties of Indian and South African Sign Languages. Recent research suggests that the sign languages of Britain, Australia, and New Zealand are in fact varieties of the same sign language, referred to as "BANZSL" (British, Australian, and New Zealand Sign Language) (Johnston 2003). The Japanese occupation of Taiwan has resulted in some dialects of TSL being very similar to NS. Following Japan's withdrawal, another form of TSL has developed, one heavily influenced by the sign language used in Shanghai, brought over by immigrants from the Chinese mainland. NS, TSL, and KSL are thought to be members of the Japanese Sign Language family (Morgan 2004, 2006).
The export of educational systems, often by individuals with religious or missionary agendas, has without question had an influence on the historical relationships between sign languages. Foremost among these is the French deaf education system, the export of which brought LSF into many countries around Europe and North America. As a result, the influence of LSF can be seen in a number of sign languages, including Irish Sign Language, ASL, RSL, LSQ, and Mexican Sign Language. A similar relationship exists between SSL and Portuguese Sign Language, following a Swedish deaf educator's establishment of a deaf school in Lisbon in 1824 (Eriksson 1998). Researchers have noted that Israeli SL is historically related to DGS, having evolved from the sign language used by German Jewish teachers who, in 1932, opened a deaf school in Jerusalem (Meir/Sandler 2007). Icelandic Sign Language is historically related to DSL, the connection stemming from the fact that deaf Icelandic people were sent to Denmark for education until the early 1900s (Aldersson 2006).

Some scholars hypothesize that modern ASL is the result of a process of creolization between indigenous ASL and the LSF that was brought to America by Clerc and Gallaudet in the early 1800s (Woodward 1978a; Fischer 1978; but see Lupton/Salmons 1996 for a reanalysis of this view). It has been shown that ASL shares many of the sociological determinants of creoles, as well as a similar means of grammatical expression. Furthermore, evidence of restructuring at the lexical, phonological, and grammatical levels points to creolization. This line of thinking has been expanded to include a broader range of sign languages (including BSL, LSF, and RSL) that have been shown to share with creoles a set of distinctive grammatical characteristics as well as a similar path of development (Deuchar 1987; see also Meier 1984 and chapter 36, Language Emergence and Creolization).

At least two sign languages that were originally heavily influenced by LSF have, in turn, had an impact on other sign languages around the globe. The teaching of Irish nuns and brothers in overseas Catholic schools for deaf children has led to Irish Sign Language influencing sign languages in South Africa, Australia, and India. Similarly, a much larger number of sign languages around the world have been heavily influenced by ASL through missionary work, through the training of deaf teachers in developing countries, and because many foreign deaf students have attended Gallaudet University in Washington, DC (the world's first and only university for deaf people) and have then returned to their home countries, taking ASL with them. ASL is unique in that, next to International Sign, it serves as a lingua franca in the worldwide deaf community, and it has thus had a major influence on many sign languages around the globe. The Ethnologue (16th edition, Lewis 2009) reports that ASL is used among some deaf communities in at least 20 other countries around the world, including many countries in Africa and the English-speaking areas of Canada. In fact, many of the national sign languages listed in the Ethnologue for some developing countries might best be considered varieties of ASL.

Recent research examines the sign languages used in French-speaking West and Central African countries and finds evidence of a creole sign language (Kamei 2006). Historically, ASL was introduced into French-speaking African countries when Dr. Andrew J. Foster, a deaf African-American and Christian missionary, began establishing schools for deaf children in 1956 (Lane/Hoffmeister/Bahan 1996).
Over time, the combination of French literacy education with ASL signs has led to the emergence of Langue des Signes Franco-Africaine (LSFA), an ASL-based creole sign language.

A survey of the sign languages used in 20 Eastern European countries suggests that, while the sign languages used in this region are distinct languages, there are two clusters of languages with wordlist similarity scores that are higher than the benchmarks for unrelated languages (Bickford 2005). One cluster includes RSL, Ukrainian Sign Language, and Moldova Sign Language. A second cluster includes the sign languages of the central European countries of Hungary, Slovakia, and the Czech Republic, and, more marginally, Romania, Poland, and Bulgaria.

A rich body of lexicostatistical research has revealed that there are seven distinct sign languages in Thailand and Viet Nam, falling into three distinct language families (Woodward 1996, 2000). The first is an indigenous sign language family consisting of a single language, Ban Khor Sign Language, which developed in isolation in Northeast Thailand. A second sign language family contains indigenous sign languages that developed in contact with other sign languages in Southeast Asia but had no contact with Western sign languages: Original Chiangmai Sign Language, Original Bangkok Sign Language, and Hai Phong Sign Language (which serves as a link between the second and third families). Finally, there exists a third sign language family comprised of "modern" sign languages that are mixtures, likely creolizations, of original sign languages with LSF and/or ASL (languages that were introduced via deaf education). This third family includes Ha Noi Sign Language, Ho Chi Minh Sign Language, Modern Thai Sign Language, and the link language Hai Phong Sign Language (Woodward 2000).

Finally, over the years, as communication and interaction between deaf people around the world have increased, a contact language known as International Sign (IS) has developed spontaneously. Formerly known as Gestuno, IS is used at international Deaf events and meetings of the World Federation of the Deaf. Recent research suggests that while IS is a type of pidgin, it is more complex than typical pidgins and its structure is more similar to that of full sign languages (Supalla/Webb 1995; also see chapter 35).
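To make the lexicostatistical procedure described above concrete, the following minimal sketch (in Python) computes a wordlist similarity score of the kind reported in such studies. It is purely illustrative: the wordlists, the coding of sign forms, the excluded items, and any benchmark figure are hypothetical placeholders, not data or thresholds from Woodward (1996, 2000) or Bickford (2005), and real cognacy judgments rest on detailed formational analysis rather than simple matching.

```python
# Illustrative sketch of a lexicostatistical comparison between two sign
# language wordlists. All data below are hypothetical placeholders.

# A modified Swadesh list: indexic items (pronouns, body parts) and highly
# iconic items are removed before comparison to avoid false cognates.
EXCLUDED = {"I", "you", "ear", "nose", "drink", "eat"}  # hypothetical

def similarity_score(list_a, list_b):
    """Percentage of compared concepts whose signs are judged cognate.

    list_a, list_b: dicts mapping concept glosses to a coded description
    of the sign form (simplified here to one string per sign).
    """
    concepts = (set(list_a) & set(list_b)) - EXCLUDED
    if not concepts:
        return 0.0
    cognates = sum(1 for c in concepts if list_a[c] == list_b[c])
    return 100.0 * cognates / len(concepts)

# Hypothetical coded wordlists for two sign language varieties.
variety_1 = {"mother": "form-A", "water": "form-B", "sun": "form-C", "drink": "form-X"}
variety_2 = {"mother": "form-A", "water": "form-B", "sun": "form-D", "drink": "form-X"}

score = similarity_score(variety_1, variety_2)
# A score above a benchmark for unrelated languages would suggest
# relatedness; very high scores suggest varieties of one language.
print(f"similarity: {score:.1f}%")  # -> similarity: 66.7%
```

Note that including the iconic item DRINK would raise the score from 66.7 % to 75 %, which illustrates why visually motivated signs are factored out: they inflate apparent similarity between languages that are not historically related.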
10. Trends in the field

During the early years of sign language linguistics (1960 to the mid 1980s, roughly), much of the research focused on discovering and describing the fundamental structural components of sign languages. Most of the research during this period was conducted on ASL, and it was primarily descriptive in nature, though in time researchers began to include theoretical discussions as well. Early works tended to stress the arbitrariness of signs and to highlight the absence of iconicity as an organizing principle underlying sign languages; features that were markedly different from spoken languages were not often addressed. The early research revealed that sign languages are structured, acquired, and processed (at the psychological level) in ways that are quite similar to spoken languages. With advances in technology, researchers eventually discovered that largely identical mechanisms underlie the neurological processing of languages in the two modalities (see Emmorey (2002) for a review). Such discoveries served as proof that, contrary to what had previously been assumed, sign languages are legitimate human languages, worthy of linguistic analysis.

In later years (mid 1980s to the late 1990s, roughly), once the linguistic status of sign languages was secure, researchers turned their attention to some of the more unusual aspects of sign languages, such as the complex use of space, the importance of non-manual features, and the presence of both iconicity and gesture within sign languages (see, for example, Liddell 2003). It was during this time that sign language research expanded beyond the borders of the United States to include other (mostly European) sign languages. With an increase in the number and range of sign languages studied, typological properties of sign languages began to be considered, and the groundwork was laid for the eventual emergence of sign language typology. While the early research showed that signed and spoken languages share many fundamental properties, when larger numbers of sign languages were studied, it became clear that sign languages are remarkably similar to one another in certain respects (for example, in the use of space in verbal and aspectual morphology). This observation led researchers to examine more seriously the effects that language modality might have on the overall structure of language.

In recent years (late 1990s to the present), as the field of sign language typology has become established, research has been conducted on an even wider range of sign languages, crucially including non-Western sign languages. This has provided scholars with the opportunity to reevaluate the assumption that sign languages show less structural variation than spoken languages do. While structural similarities between sign languages certainly exist (and they are, indeed, striking), systematic and comparative studies on a broader range of sign languages reveal some interesting variation (e.g. in negation, plural marking, and the position of functional categories; see Perniss/Pfau/Steinbach 2007; Zeshan 2006, 2008). This line of inquiry has great potential to inform our understanding of typological variation as well as of the universals of language and cognition.

Cross-linguistic research on an increasing number of natural sign languages has been facilitated by the development of multimedia tools for the collection, annotation, and dissemination of primary sign language data. An early frontrunner in this area was SignStream, a database tool developed in the mid 1990s at Boston University's ASL Linguistic Research Project (ASLLRP), under the direction of Carol Neidle. However, the most widely used current technology is ELAN (EUDICO Linguistic Annotator). Originally developed at the Max Planck Institute for Psycholinguistics in Nijmegen, the Netherlands, ELAN is a language archiving technology that enables researchers to create complex annotations on video and audio resources (see the illustrative sketch at the end of this section). These tools make it possible to create large corpora of sign language digital video data, an essential step in the process of broad-scale linguistic investigations and typological comparisons (see Segouat/Braffort 2009). Sign language corpus projects are underway in Australia, Ireland, the Netherlands, the United Kingdom, France, Germany, the United States, and the Czech Republic, to name a few places. Embracing the tools of the day, there is even a sign language corpus wiki that serves as a resource for the emerging field of sign language corpus linguistics (http://sign.let.ru.nl/groups/slcwikigroup/).

One of the most fascinating areas of recent research has been in the domain of emerging sign languages (see Meir et al. (2010) for an overview).
A handful of researchers around the globe have been studying these new sign languages, which emerge when deaf people without any previous exposure to language, either spoken or signed, come together and form a language community, be it in the context of villages with mixed deaf and hearing populations (see chapter 24, Shared Sign Languages) or of newly formed deaf communities, as in the well-known case of a school for deaf children in Managua, Nicaragua (Kegl/Senghas/Coppola 1999; see chapter 36 for discussion).
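As an illustration of the annotation technology mentioned above: ELAN stores its annotations in EAF files, an XML format in which time-aligned annotations on named tiers refer to a shared table of time slots. The sketch below, using only Python's standard library, extracts glosses and time codes from such a file. The file name and tier name are hypothetical, and real EAF files contain further structure (linked media descriptors, tier hierarchies with reference annotations, controlled vocabularies) that this sketch ignores.

```python
# Minimal sketch: extract time-aligned gloss annotations from an ELAN
# .eaf file (an XML format). Assumes the basic EAF layout: a TIME_ORDER
# of TIME_SLOT elements and TIERs containing ALIGNABLE_ANNOTATIONs.
import xml.etree.ElementTree as ET

def read_tier(eaf_path, tier_id):
    """Return (start_ms, end_ms, value) triples for one annotation tier."""
    root = ET.parse(eaf_path).getroot()

    # Map time-slot IDs to millisecond values (unaligned slots are skipped).
    times = {
        slot.get("TIME_SLOT_ID"): int(slot.get("TIME_VALUE"))
        for slot in root.iterfind("TIME_ORDER/TIME_SLOT")
        if slot.get("TIME_VALUE") is not None
    }

    annotations = []
    for tier in root.iterfind("TIER"):
        if tier.get("TIER_ID") != tier_id:
            continue
        for ann in tier.iterfind("ANNOTATION/ALIGNABLE_ANNOTATION"):
            start = times.get(ann.get("TIME_SLOT_REF1"))
            end = times.get(ann.get("TIME_SLOT_REF2"))
            if start is not None and end is not None:
                annotations.append((start, end, ann.findtext("ANNOTATION_VALUE", "")))
    return sorted(annotations)

# Hypothetical usage: list the glosses on a right-hand gloss tier.
for start, end, gloss in read_tier("session01.eaf", "RH-gloss"):
    print(f"{start:>8} {end:>8}  {gloss}")
```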
11. Conclusion

The discipline of sign language linguistics came into being 50 years ago, and the distance traveled in this short period of time has indeed been great. In terms of general perceptions, sign languages have gone from being considered primitive systems of gesture to being recognized for their richness and complexity, as well as for their cultural and linguistic value. Though initially slow to catch on, the "discovery" of sign languages (or, more precisely, the realization that sign languages are full human languages) has been embraced by scholars around the globe.

An informal survey of American introductory linguistics textbooks from the past several decades reveals a gradual though significant change in the perception of sign languages as natural human languages (see McBurney 2001). In textbooks from the mid 20th century, language was equated with speech (as per Hockett's (1960) design features of language), and the sign languages of deaf people were simply not mentioned. By the 1970s, a full decade after linguistic investigations began, sign languages began to be addressed, but only in a cursory manner; Bolinger's Aspects of Language, 2nd edition (1975), discusses sign language briefly in a section on language origins, noting that sign languages are "very nearly" as expressive a medium of communication as spoken languages. Fromkin and Rodman's An Introduction to Language (1974) includes a discussion of "deaf sign" in a chapter on animal languages; although the discussion is brief, they do mention several significant aspects of sign languages (including syntactic and semantic structure), and they directly refute Hockett's first design feature of language, arguing that sign languages are human languages and that the use of the vocal-auditory channel is therefore not a defining property of human language. The 1978 edition of the text includes an entire section on ASL and the growing field of research surrounding it. Successive editions include increasingly extensive coverage, and whereas earlier editions covered sign languages in a separate section, starting with the 1998 edition, discussion of sign languages is integrated throughout the text, in sections on linguistic knowledge, language universals, phonology, morphology, syntax, language acquisition, and language processing. Although it has taken some time, the ideas initially proposed by Stokoe and further developed by sign linguists around the world have trickled down and become part of the standard discussion of human language.

In addition to the continued and expanding professional conference and publishing activities specific to sign language linguistics, sign language research is crossing over into many related disciplines, with papers being published in a growing number of journals and conference proceedings. Over the past decade, sign language research has been presented at a wide range of academic conferences, and special sign language sessions or workshops have been held in conjunction with many professional conferences in related disciplines (including child language acquisition, bilingual acquisition, gesture studies, minority languages, endangered languages, sociolinguistics, language typology, laboratory phonology, corpus linguistics, computational linguistics, anthropology, psychology, and neuroscience). Without question, research into sign languages has enriched our understanding of the human mind and its capacity for language.
Sign languages have proven to be a fruitful area of study, the findings of which shed light on some of the most challenging and significant questions in linguistics and neighboring disciplines. One need only glance through this volume's table of contents to get a sense of how broad and varied the discipline of sign language linguistics has become; it is a testament to the compelling nature of the subject matter as well as to the dedication and excellence of the community of scholars who have made this their life's work.
12. Literature

Akach, Philemon A.O.
1992 Sentence Formation in Kenyan Sign Language. In: The East African Sign Language Seminar, Karen, Nairobi, Kenya, 24th–28th August 1992. Copenhagen: Danish Deaf Association, 45–51.
Aldersson, Russell R.
2006 A Lexical Comparison of Icelandic Sign Language and Danish Sign Language. In: Birkbeck Studies in Applied Linguistics, Vol. 2.
Anderson, Lloyd B.
1979 A Comparison of Some American, English, and Swedish Signs: Evidence on Historical Change in Signs and Some Family Relationships of Sign Languages. Manuscript, Gallaudet University, Washington, DC.
Aristotle; Hammond, William Alexander
1902 Aristotle's Psychology: A Treatise on the Principles of Life (De Anima and Parva Naturalia). London: S. Sonnenschein & Co.
Armstrong, David F./Karchmer, Michael A./Van Cleve, John V. (eds.)
2002 The Study of Signed Languages: Essays in Honor of William C. Stokoe. Washington, DC: Gallaudet University Press.
Augustine; Oates, Whitney Jennings
1948 Basic Writings of St. Augustine. New York: Random House.
Baynton, Douglas C.
1996 Forbidden Signs: American Culture and the Campaign Against Sign Language. Chicago: University of Chicago Press.
Baynton, Douglas C.
2002 The Curious Death of Sign Language Studies in the Nineteenth Century. In: Armstrong, David F./Karchmer, Michael A./Van Cleve, John V. (eds.), The Study of Signed Languages: Essays in Honor of William C. Stokoe. Washington, DC: Gallaudet University Press, 13–34.
Bébian, Auguste
1825 Mimographie: Essai d'écriture Mimique, Propre à Régulariser le Langage des Sourds-muets. Paris: Colas.
Bender, Ruth
1960 The Conquest of Deafness. Cleveland, OH: The Press of Western Reserve.
Bergman, Brita
1982 Studies in Swedish Sign Language. Department of Linguistics, University of Stockholm.
Bickford, J. Albert
2005 The Signed Languages of Eastern Europe. SIL Electronic Survey Reports 2005-026: 45. [Available at: www.sil.org/silesr/abstract.asp?ref=2005-026]
Bloomfield, Leonard
1933 Language. New York: Holt, Rinehart & Winston.
Bolinger, Dwight Le Merton
1975 Aspects of Language, 2nd edition. New York: Harcourt Brace Jovanovich.
Boyes Braem, Penny
1984 Studying Swiss German Sign Language Dialects. In: Loncke, Filip/Boyes Braem, Penny/Lebrun, Yvan (eds.), Recent Research on European Sign Languages: Proceedings of the European Meeting of Sign Language Research, Held in Brussels, September 19–25, 1982. Lisse: Swets & Zeitlinger, 93–103.
Branson, Jan/Miller, Don
2002 Damned for Their Difference: The Cultural Construction of Deaf People as Disabled. Washington, DC: Gallaudet University Press.
Brennan, Mary/Hayhurst, Allan B.
1980 The Renaissance of British Sign Language. In: Baker-Shenk, Charlotte/Battison, Robbin (eds.), Sign Language and the Deaf Community: Essays in Honor of William C. Stokoe. Silver Spring, MD: National Association of the Deaf, 233–244.
Brien, David
1992 Dictionary of British Sign Language/English. London: Faber and Faber.
Bulwer, John
1644 Chirologia: Or the Natural Language of the Hand. London: Harper.
Bulwer, John
1648 Philocophus: Or the Deafe and Dumbe Man's Friend. London: Humphrey Moseley.
Chao, Chienmin/Chu, Hsihsiung/Liu, Chaochung
1988 Taiwan Ziran Shouyu [Taiwan Natural Sign Language]. Taipei: Deaf Sign Language Research Association. [Revised edition of Chienmin Chao (1981)]
Chomsky, Noam
1957 Syntactic Structures. The Hague: Mouton.
Chomsky, Noam
1965 Aspects of the Theory of Syntax. Cambridge, MA: MIT Press.
Collins-Ahlgren, Marianne
1989 Aspects of New Zealand Sign. PhD Dissertation, Victoria University of Wellington, Wellington.
Cuxac, Christian
1983 Le Langage des Sourds. Paris: Payot.
Dalgarno, George
1680 Didascalocophus; or the Deaf and Dumb Man's Tutor, to Which Is Added a Discourse of the Nature and Number of Double Consonants Both Which Tracts Being the First (for What the Author Knows) that Have Been Published Upon Either of the Subjects. Oxford: Printed at the Theater in Oxford.
Danby, Herbert
1933 The Mishnah. Oxford: Clarendon Press.
Daniels, Marilyn
1997 Benedictine Roots in the Development of Deaf Education: Listening with the Heart. Westport, CT: Bergin & Garvey.
Dekesel, Kristiaan
1992 John Bulwer: The Founding Father of BSL Research, Part I. In: Signpost 5(4), 11–14.
Dekesel, Kristiaan
1993 John Bulwer: The Founding Father of BSL Research, Part II. In: Signpost 6(1), 36–46.
Desloges, Pierre
1779 Observations d'un Sourd et Muèt, sur un Cours Élémentaire d'éducation des Sourds et Muèts: Publié en 1779 par M. l'abbé Deschamps. A Amsterdam & se trouve a Paris: Chez B. Morin.
Deuchar, Margaret
1978 Diglossia in British Sign Language. PhD Dissertation, Stanford University.
Deuchar, Margaret
1987 Sign Languages as Creoles and Chomsky's Notion of Universal Grammar. In: Modgil, Sohan/Modgil, Celia (eds.), Noam Chomsky: Consensus and Controversy. London: Falmer Press, 81–91.
Diderot, Denis/Meyer, Paul Hugo
1965 Lettre sur les Sourds et Muets. Genève: Droz.
Digby, Kenelm
1644 Treatise on the Nature of Bodies. Paris: Blaizot.
Dotter, Franz/Okorn, Ingeborg
2003 Austria's Hidden Conflict: Hearing Culture Versus Deaf Culture. In: Monaghan, Leila F./Schmaling, Constanze/Nakamura, Karen/Turner, Graham H. (eds.), Many Ways to be Deaf: International Variation in Deaf Communities. Washington, DC: Gallaudet University Press, 49–66.
Emmorey, Karen
2002 Language, Cognition, and the Brain: Insights from Sign Language Research. Mahwah, NJ: Lawrence Erlbaum.
Engberg-Pedersen, Elisabeth/Hansen, Britta/Sørensen, Ruth Kjær
1981 Døves Tegnsprog: Træk af Dansk Tegnsprogs Grammatik. Århus: Arkona.
L'Epée, Abbé de
1776 Institution des Sourds et Muets, par la Voie des Signes Méthodiques [The Instruction of the Deaf and Dumb by Means of Methodical Signs]. Paris: Le Crozet.
Eriksson, Per
1998 The History of Deaf People: A Source Book. Örebro: Daufr.
Facchini, Massimo G.
1985 An Historical Reconstruction of Events Leading to the Congress of Milan in 1880. In: Stokoe, William C./Volterra, Virginia (eds.), SLR '83: Proceedings of the 3rd International Symposium on Sign Language Research, Rome, June 22–26, 1983. Silver Spring, MD: Linstok Press, 356–362.
Fischer, Renate
1993 Language of Action. In: Fischer, Renate/Lane, Harlan (eds.), Looking Back: A Reader on the History of Deaf Communities and Their Sign Languages. Hamburg: Signum, 431–433.
Fischer, Renate
1995 The Notation System of Sign Languages: Bébian's Mimographie. In: Schermer, Trude/Bos, Heleen (eds.), Sign Language Research 1994: Proceedings of the 4th European Congress on Sign Language Research, Munich, September 1–3, 1994. Hamburg: Signum, 285–301.
Fischer, Renate
2002 The Study of Natural Sign Language in Eighteenth-Century France. In: Sign Language Studies 2(4), 391–406.
Fischer, Susan D.
1973 Two Processes of Reduplication in American Sign Language. In: Foundations of Language 9, 469–480.
Fischer, Susan D.
1978 Sign Language and Creoles. In: Siple, Patricia (ed.), Understanding Language through Sign Language Research. New York: Academic Press, 309–331.
Frishberg, Nancy
1975 Arbitrariness and Iconicity: Historical Change in American Sign Language. In: Language 51(3), 696–719.
Frishberg, Nancy
1987 Home Sign. In: Van Cleve, John V. (ed.), Gallaudet Encyclopedia of Deaf People and Deafness, Vol. 3. New York: McGraw-Hill, 128–131.
Fromkin, Victoria/Rodman, Robert
1974 An Introduction to Language. New York: Holt, Rinehart & Winston.
Gardiner, Alan H.
1911 Egyptian Hieratic Texts, Transcribed, Translated and Annotated. In: Series I: Literary Texts of the New Kingdom. Hildesheim: Georg Olms.
Goldin-Meadow, Susan
2003 The Resilience of Language: What Gesture Creation in Deaf Children Can Tell Us About How All Children Learn Language. New York: Psychology Press.
Groce, Nora Ellen
1985 Everyone Here Spoke Sign Language: Hereditary Deafness on Martha's Vineyard. Cambridge, MA: Harvard University Press.
Hendriks, Bernadet
2008 Jordanian Sign Language: Aspects of Grammar from a Cross-linguistic Perspective. PhD Dissertation, University of Amsterdam. Utrecht: LOT.
Hockett, Charles F.
1960 The Origin of Speech. In: Scientific American 203, 89–97.
Hodgson, Kenneth W.
1954 The Deaf and Their Problems: A Study in Special Education. New York: Philosophical Library.
Hong, Sung-Eun
2003 Empirical Survey of Animal Classifiers in Korean Sign Language (KSL). In: Sign Language & Linguistics 6(1), 77–99.
Hong, Sung-Eun
2008 Eine Empirische Untersuchung zu Kongruenzverben in der Koreanischen Gebärdensprache. Hamburg: Signum.
Johnston, Trevor
1989 Auslan: The Sign Language of the Australian Deaf Community. Vol. 1. PhD Dissertation, University of Sydney.
Johnston, Trevor
2003 BSL, Auslan and NZSL: Three Signed Languages or One? In: Baker, Anne/Bogaerde, Beppie van den/Crasborn, Onno (eds.), Cross-linguistic Perspectives in Sign Language Research: Selected Papers from TISLR 2000. Hamburg: Signum, 47–69.
Johnston, Trevor/Schembri, Adam
2007 Australian Sign Language (Auslan): An Introduction to Sign Language Linguistics. Cambridge: Cambridge University Press.
Jouison, Paul
1990 Analysis and Linear Transcription of Sign Language Discourse. In: Prillwitz, Siegmund/Vollhaber, Thomas (eds.), Current Trends in European Sign Language Research: Proceedings of the 3rd European Congress on Sign Language Research. Hamburg: Signum, 337–353.
Kabatilo, Ziad Salah
1982 A Pilot Description of Indigenous Signs Used by Deaf Persons in Jordan. PhD Dissertation, Michigan State University.
Kamei, Nobutaka
2006 History of Deaf People and Sign Languages in Africa: Fieldwork in the 'Kingdom' Derived from Andrew J. Foster. Tokyo: Akashi Shoten Co., Ltd.
Kegl, Judy/Senghas, Ann/Coppola, Marie
1999 Creation through Contact: Sign Language Emergence and Sign Language Change in Nicaragua. In: DeGraff, Michel (ed.), Language Creation and Language Change: Creolization, Diachrony, and Development. Cambridge, MA: MIT Press, 179–237.
Kendon, Adam
1989 Sign Languages of Aboriginal Australia: Cultural, Semiotic, and Communicative Perspectives. Cambridge: Cambridge University Press.
Kendon, Adam
2002 Historical Observations on the Relationship Between Research on Sign Languages and Language Origins Theory. In: Armstrong, David F./Karchmer, Michael A./Van Cleve, John V. (eds.), The Study of Signed Languages: Essays in Honor of William C. Stokoe. Washington, DC: Gallaudet University Press, 35–52.
Klima, Edward S./Bellugi, Ursula
1979 The Signs of Language. Cambridge, MA: Harvard University Press.
Kyle, Jim (ed.)
1987 Sign and School: Using Signs in Deaf Children's Development. Clevedon: Multilingual Matters.
Kyle, James/Woll, Bencie
1985 Sign Language: The Study of Deaf People and Deafness. Cambridge: Cambridge University Press.
Ladd, Paddy
2003 Understanding Deaf Culture: In Search of Deafhood. Clevedon: Multilingual Matters.
Lane, Harlan L.
1984 When the Mind Hears: A History of the Deaf. New York: Random House.
Lane, Harlan L.
1992 The Mask of Benevolence: Disabling the Deaf Community. New York: Knopf.
Lane, Harlan L./Hoffmeister, Robert/Bahan, Benjamin J.
1996 A Journey Into the Deaf-World. San Diego, CA: DawnSignPress.
Lewis, M. Paul (ed.)
2009 Ethnologue: Languages of the World, 16th edition. Dallas, TX: SIL International. [Online version: www.ethnologue.com]
Liddell, Scott K.
1980 American Sign Language Syntax. The Hague: Mouton.
Liddell, Scott K.
2003 Grammar, Gesture, and Meaning in American Sign Language. Cambridge: Cambridge University Press.
Lucas, Ceil/Valli, Clayton
1989 Language Contact in the American Deaf Community. In: Lucas, Ceil (ed.), The Sociolinguistics of the Deaf Community. San Diego, CA: Academic Press, 11–40.
Lucas, Ceil/Valli, Clayton
1992 Language Contact in the American Deaf Community. San Diego, CA: Academic Press.
Lupton, Linda/Salmons, Joe
1996 A Re-analysis of the Creole Status of American Sign Language. In: Sign Language Studies 90, 80–94.
Massone, Maria Ignacia
1994 Lengua de Señas Argentina: Análisis y Vocabulario Bilingüe. Buenos Aires: Edicial.
McBurney, Susan L.
2001 William Stokoe and the Discipline of Sign Language Linguistics. In: Historiographia Linguistica 28(1/2), 143–186.
McKee, David/Kennedy, Graeme
2000 Lexical Comparison of Signs from American, Australian, British and New Zealand Sign Languages. In: Emmorey, Karen/Lane, Harlan (eds.), The Signs of Language Revisited: An Anthology to Honor Ursula Bellugi and Edward Klima. Mahwah, NJ: Lawrence Erlbaum, 49–76.
Meier, Richard P.
1984 Sign as Creole. In: Behavioral and Brain Sciences 7, 201–202.
Meier, Richard P.
2002 Why Different, Why the Same? Explaining Effects and Non-effects of Modality Upon Linguistic Structure in Sign and Speech. In: Meier, Richard P./Cormier, Kearsy/Quinto-Pozos, David (eds.), Modality and Structure in Signed and Spoken Language. Cambridge: Cambridge University Press, 1–25.
Meir, Irit/Sandler, Wendy
2007 A Language in Space: The Story of Israeli Sign Language. New York: Lawrence Erlbaum.
Meir, Irit/Sandler, Wendy/Padden, Carol/Aronoff, Mark
2010 Emerging Sign Languages. In: Marschark, Marc/Spencer, Patricia (eds.), Oxford Handbook of Deaf Studies, Language, and Education, Vol. 2. Oxford: Oxford University Press, 267–280.
Miles, M.
2000 Signing in the Seraglio: Mutes, Dwarfs and Jestures at the Ottoman Court 1500–1700. In: Disability and Society 15(1), 115–134.
Miles, M.
2005 Deaf People Living and Communicating in African Histories, c. 960s–1960s. [New, much extended Version 5.01, incorporating an article first published in Disability & Society 19, 531–545, 2004, titled then: Locating Deaf People, Gesture and Sign in African Histories, 1450s–1950s. Available online: www.independentliving.org/docs7/miles2005a.html]
Miller, Christopher
2001 The Adaptation of Loan Words in Quebec Sign Language: Multiple Sources, Multiple Processes. In: Brentari, Diane (ed.), Foreign Vocabulary in Sign Languages: A Cross-linguistic Investigation of Word Formation. Mahwah, NJ: Lawrence Erlbaum, 139–173.
Mitchell, Ross E./Karchmer, Michael A.
2004 Chasing the Mythical Ten Percent: Parental Hearing Status of Deaf and Hard of Hearing Students in the United States. In: Sign Language Studies 4(2), 138–163.
Moores, Donald F.
1987 Educating the Deaf: Psychology, Principles, and Practices. Boston: Houghton Mifflin.
Morgan, Michael W.
2004 Tracing the Family Tree: Tree-reconstruction of Two Sign Language Families. Poster presented at the 8th International Conference on Theoretical Issues in Sign Language Research, Barcelona, Spain.
Morgan, Michael W.
2006 Interrogatives and Negatives in Japanese Sign Language. In: Zeshan, Ulrike (ed.), Interrogatives and Negatives in Signed Languages. Nijmegen: Ishara Press, 91–127.
Newport, Elissa/Supalla, Ted
2000 Sign Language Research at the Millennium. In: Emmorey, Karen/Lane, Harlan (eds.), The Signs of Language Revisited: An Anthology to Honor Ursula Bellugi and Edward Klima. Mahwah, NJ: Lawrence Erlbaum, 103–114.
Nyst, Victoria
1999 Variation in Handshape in Uganda Sign Language. Diploma Thesis, Leiden University, The Netherlands.
Nyst, Victoria
2007 A Descriptive Analysis of Adamorobe Sign Language (Ghana). PhD Dissertation, University of Amsterdam. Utrecht: LOT.
Okombo, Okoth/Akach, Philemon A.O.
1997 Language Convergence and Wave Phenomena in the Growth of a National Sign Language in Kenya. In: International Journal of the Sociology of Language 125, 131–144.
Oviedo, Alejandro
2001 Apuntes Para una Gramática de la Lengua de Señas Colombiana. Santafé de Bogotá, Colombia: INSOR.
Padden, Carol A.
1983 Interaction of Morphology and Syntax in American Sign Language. PhD Dissertation, University of California, San Diego. [Published in 1988 in Outstanding Dissertations in Linguistics, Series IV, New York: Garland]
Padden, Carol/Humphries, Tom
1988 Deaf in America: Voices from a Culture. Cambridge, MA: Harvard University Press.
Penn, Claire/Reagan, Timothy
1994 The Properties of South African Sign Language: Lexical Diversity and Syntactic Unity. In: Sign Language Studies 85, 317–325.
Perniss, Pamela/Pfau, Roland/Steinbach, Markus
2007 Visible Variation: Comparative Studies on Sign Language Structure. Berlin: Mouton de Gruyter.
Pilleux, Mauricio/Cuevas, Hernán/Avalos, Erica
1991 El Lenguaje de Señas: Análisis Sintáctico-Semántico. Valdivia: Universidad Austral de Chile.
Pizzuto, Elena/Volterra, Virginia
1996 Sign Language Lexicon: Cross-linguistic and Cross-cultural Comparisons. Report prepared for the Commission of the European Communities, Human Capital and Mobility Programme; Project: Intersign: Multi Professional Study of Sign Language and the Deaf Community in Europe (Network).
Plann, Susan
1993 Pedro Ponce de León: Myth and Reality. In: Van Cleve, John V. (ed.), Deaf History Unveiled: Interpretations from the New Scholarship. Washington, DC: Gallaudet University Press, 1–12.
Plann, Susan
1997 A Silent Minority: Deaf Education in Spain 1550–1835. Berkeley, CA: University of California Press.
Plato; Jowett, Benjamin
1931 The Dialogues of Plato. London: Oxford University Press.
Prillwitz, Siegmund/Leven, Regina
1985 Skizzen zu einer Grammatik der Deutschen Gebärdensprache. Hamburg: Forschungsstelle Deutsche Gebärdensprache.
Quadros, Ronice Müller de
1999 Phrase Structure of Brazilian Sign Language. PhD Dissertation, Pontifícia Universidade Católica do Rio Grande do Sul, Brazil.
Quartararo, Anne
1993 Republicanism, Deaf Identity, and the Career of Henri Gaillard in Late-Nineteenth-Century France. In: Van Cleve, John V. (ed.), Deaf History Unveiled: Interpretations from the New Scholarship. Washington, DC: Gallaudet University Press, 40–52.
Radutzky, Elena
1993 The Education of Deaf People in Italy and the Use of Italian Sign Language. In: Van Cleve, John V. (ed.), Deaf History Unveiled: Interpretations from the New Scholarship. Washington, DC: Gallaudet University Press, 237–251.
Rissanen, Terhi
1986 The Basic Structure of Finnish Sign Language. In: Tervoort, Bernard T. (ed.), Signs of Life: Proceedings of the Second European Congress on Sign Language Research, Amsterdam, July 14–18, 1985. Amsterdam: University of Amsterdam, 42–46.
Sayers, Edna Edith/Gates, Diana
2008 Lydia Huntley Sigourney and the Beginnings of American Deaf Education in Hartford: It Takes a Village. In: Sign Language Studies 8(4), 369–411.
Schermer, Trude
2003 From Variant to Standard: An Overview of the Standardization Process of the Lexicon of Sign Language of the Netherlands over Two Decades. In: Sign Language Studies 3(4), 469–486.
Schlesinger, Izchak M./Namir, Lila
1976 Recent Research on Israeli Sign Language. In: Report on the 4th International Conference on Deafness, Tel Aviv, March 18–23, 1973. Silver Spring, MD: National Association of the Deaf, 114.
Schmaling, Constanze
2000 Maganar Hannu: Language of the Hands. A Descriptive Analysis of Hausa Sign Language. Hamburg: Signum.
Schröder, Odd-Inge
1983 Fonologien i Norsk Tegnsprog. In: Tellevik, Jon M./Vogt-Svendsen, Marit/Schröder, Odd-Inge (eds.), Tegnspråk og Undervisning av Døve Barn: Nordisk Seminar, Trondheim, Juni 1982. Trondheim: Tapir, 39–53.
Segouat, Jérémie/Braffort, Annelies
2009 Toward Categorization of Sign Language Corpora. In: Proceedings of the 2nd Workshop on Building and Using Comparable Corpora. Suntec, Singapore, August 2009, 64–67.
Seigel, J.P.
1969 The Enlightenment and the Evolution of a Language of Signs in France and England. In: Journal of the History of Ideas 30(1), 96–115.
Smith, Wayne H.
1989 The Morphological Characteristics of Verbs in Taiwan Sign Language. PhD Dissertation, Indiana University, Bloomington.
Smith, Wayne H.
2005 Taiwan Sign Language Research: An Historical Overview. In: Language and Linguistics 6(2), 187–215.
Stokoe, William C.
1960 Sign Language Structure: An Outline of the Visual Communication System of the American Deaf. In: Studies in Linguistics Occasional Papers 8. Buffalo: University of Buffalo Press. [Re-issued 2005, Journal of Deaf Studies and Deaf Education 10(1), 3–37]
Stokoe, William C.
1972 Semiotics and Human Sign Languages. The Hague: Mouton.
Stokoe, William C.
1979 Language and the Deaf Experience. In: Alatis, J./Tucker, G.R. (eds.), Proceedings from the 30th Annual Georgetown University Round Table on Languages and Linguistics, 222–230.
Stokoe, William C./Casterline, Dorothy/Croneberg, Carl
1965 A Dictionary of American Sign Language on Linguistic Principles. Washington, DC: Gallaudet College Press.
Stone, Christopher/Woll, Bencie
2008 Dumb O Jemmy and Others: Deaf People, Interpreters and the London Courts in the Eighteenth and Nineteenth Centuries. In: Sign Language Studies 8(3), 226–240.
Supalla, Ted
1982 Structure and Acquisition of Verbs of Motion and Location in American Sign Language. PhD Dissertation, University of California, San Diego.
Supalla, Ted/Newport, Elissa L.
1978 How Many Seats in a Chair? The Derivation of Nouns and Verbs in American Sign Language. In: Siple, Patricia (ed.), Understanding Language through Sign Language Research. New York: Academic Press, 181–214.
Supalla, Ted/Webb, Rebecca
1995 The Grammar of International Sign: A New Look at Pidgin Languages. In: Emmorey, Karen/Reilly, Judy S. (eds.), Language, Gesture, and Space. Hillsdale, NJ: Lawrence Erlbaum, 333–352.
Takashi, Tanokami/Peng, Fred C.
1976 Shuwa o Megutte: On the Nature of Sign Language. Hiroshima, Japan: Bunka Hyoron Shuppan.
Tang, Gladys
2007 Hong Kong Sign Language: A Trilingual Dictionary with Linguistic Descriptions. Hong Kong: Chinese University Press.
Tervoort, Bernard
1954 Structurele Analyse van Visueel Taalgebruik Binnen een Groep Dove Kinderen [Structural Analysis of Visual Language Use in a Group of Deaf Children]. Amsterdam: Noord-Hollandsche Uitgevers Maatschappij.
Tervoort, Bernard
1961 Esoteric Symbolism in the Communication Behavior of Young Deaf Children. In: American Annals of the Deaf 106, 436–480.
Tervoort, Bernard
1994 Sign Languages in Europe: History and Research. In: Asher, Ronald E./Simpson, J. M. Y. (eds.), The Encyclopedia of Language and Linguistics. Oxford: Pergamon Press, 3923–3926.
Trager, George L./Smith, Henry Lee
1951 An Outline of English Structure (Studies in Linguistics: Occasional Papers 3). Norman, OK: Battenberg Press.
Van Cleve, John V./Crouch, Barry A.
1989 A Place of Their Own: Creating the Deaf Community in America. Washington, DC: Gallaudet University Press.
Vasishta, Madan/Woodward, James/Wilson, Kirk L.
1978 Sign Languages in India: Regional Variations Within the Deaf Populations. In: Indian Journal of Applied Linguistics 4(2), 66–74.
Vogt-Svendsen, Marit
1983 Lip Movements in Norwegian Sign Language. In: Kyle, James/Woll, Bencie (eds.), Language in Sign: An International Perspective on Sign Language. London: Croom Helm, 85–96.
Volterra, Virginia (ed.)
1987 La Lingua Italiana dei Segni: La Comunicazione Visivo Gestuale dei Sordi. Bologna: Il Mulino.
West, LaMont
1963 A Terminal Report Outlining the Research Problem, Procedure of Investigation and Results to Date in the Study of Australian Aboriginal Sign Language. Sydney. AIATSIS Call number: MS 2456/1 (Item 3).
Winzer, Margret A.
1987 Canada. In: Van Cleve, John V. (ed.), Gallaudet Encyclopedia of Deaf People and Deafness. Vol. 1. A–G. New York, NY: McGraw-Hill, 164–168.
Wittmann, Henri
1991 Classification Linguistique des Langues Signées Nonvocalement. In: Revue Québécoise de Linguistique Théorique et Appliquée: Les Langues Signées 10(1), 215–288.
Woll, Bencie
1987 Historical and Comparative Aspects of BSL. In: Kyle, Jim (ed.), Sign and School: Using Signs in Deaf Children's Development. Clevedon: Multilingual Matters, 12–34.
Woll, Bencie
2003 Modality, Universality and the Similarities Among Sign Languages: An Historical Perspective. In: Baker, Anne/Bogaerde, Beppie van den/Crasborn, Onno (eds.), Cross-linguistic Perspectives in Sign Language Research: Selected Papers from TISLR 2000. Hamburg: Signum, 17–30.
Woll, Bencie
2006 Sign Language: History. In: Brown, Keith (ed.), The Encyclopedia of Language and Linguistics. Amsterdam: Elsevier, 307–310.
Woll, Bencie/Sutton-Spence, Rachel/Elton, Frances
2001 Multilingualism: The Global Approach to Sign Languages. In: Lucas, Ceil (ed.), The Sociolinguistics of Sign Languages. Cambridge: Cambridge University Press, 8–32.
Woodward, James
1973 Implicational Lects on the Deaf Diglossic Continuum. PhD Dissertation, Georgetown University, Washington, DC.
Woodward, James
1976 Signs of Change: Historical Variation in ASL. In: Sign Language Studies 5(10), 81–94.
Woodward, James
1978a Historical Basis of ASL. In: Siple, Patricia (ed.), Understanding Language through Sign Language Research. New York, NY: Academic Press, 333–348.
Woodward, James
1978b All in the Family: Kinship Lexicalization Across Sign Languages. In: Sign Language Studies 19, 121–138.
Woodward, James
1991 Sign Language Varieties in Costa Rica. In: Sign Language Studies 73, 329–346.
Woodward, James
1993 Lexical Evidence for the Existence of South Asian and East Asian Sign Language Families. In: Journal of Asian Pacific Communication 4(2), 91–106.
Woodward, James
1996 Modern Standard Thai Sign Language: Influence from ASL, and Its Relationship to Original Thai Sign Varieties. In: Sign Language Studies 92, 227–252.
Woodward, James
2000 Sign Language and Sign Language Families in Thailand and Viet Nam. In: Emmorey, Karen/Lane, Harlan (eds.), The Signs of Language Revisited: An Anthology to Honor Ursula Bellugi and Edward Klima. Mahwah, NJ: Lawrence Erlbaum, 23–47.
Yau, Shun-Chiu
1991 La Langue des Signes Chinoise. In: Cahiers de Linguistique Asie Orientale 20(1), 138–142.
Zeshan, Ulrike
2000 Sign Language in Indo-Pakistan: A Description of a Signed Language. Amsterdam: Benjamins.
Zeshan, Ulrike (ed.)
2006 Interrogative and Negative Constructions in Sign Languages. Nijmegen: Ishara Press.
Zeshan, Ulrike
2008 Roots, Leaves and Branches: The Typology of Sign Languages. In: Quadros, Ronice M. de (ed.), Sign Languages: Spinning and Unraveling the Past, Present and Future. Proceedings of the 9th International Conference on Theoretical Issues in Sign Language Research, Florianopolis, Brazil, December 2006. Petropolis, Brazil: Editora Arara Azul, 671–695.
Susan McBurney, Spokane, Washington (USA)
39. Deaf education and bilingualism

1. Introduction
2. Early records of deaf education
3. Bimodal bilingualism at the societal level
4. Deaf education in the 21st century
5. Bilinguals
6. Conclusion
7. Literature
Abstract

In this chapter, the major findings from research on deaf education and bilingualism are reviewed. Following a short introduction to (sign) bilingualism, the second section provides an overview of the history of deaf education from the earliest records until the late 19th century, highlighting the main changes in philosophy and methods at the levels of provision and orientation. Section 3 discusses the major factors that have determined the path toward sign bilingualism in deaf communities, in particular at the levels of language policy and education. Current developments and challenges in deaf education, as reflected in the recent diversification of education methods, are addressed in section 4, with a focus on bilingual education conceptions. The final section is centred on deaf bilinguals, their language development, and patterns of language use, including cross-modal contact phenomena in educational and other sociolinguistic contexts.
1. Introduction

Bilingualism is not the exception, but rather the norm for the greater part of the world population (Baker 2001; Grosjean 1982; Romaine 1996; Tracy/Gawlitzek-Maiwald 2000). Maintenance and promotion of bilingualism at the societal level are related to the status of the languages involved (majority or prestige language vs. minority language). Indeed, while social and economic advantages are attributed to the ability to use various 'prestige' languages, minority bilingualism is generally associated with low academic achievement and social problems. This apparent paradox reflects the symbolic value of language and the continuing predominance of the nation-state ideology in the majority of Western countries, in which language is regarded as one of the most powerful guarantors of social cohesion and language policies are commonly monolingual in orientation. The situation is markedly different in countries with a longstanding tradition of multilingualism, as is the case in India (Mohanty 2006).

The factors that determine the vitality of two or more languages in a given social context may change over time; so may the patterns of language use in a given speech community, indicating that bilingualism is a dynamic phenomenon. At the level of language users, the different types of bilingualism encountered are commonly described in terms of a continuum ranging from balanced bilingualism to partial or semibilingualism (Romaine 1996). In view of the variety of acquisition types and competence levels attained, some authors have proposed defining bilingualism as the regular use of more than one language in everyday life (Grosjean 1982).

Following this broad definition of bilingualism, most members of deaf communities are bilingual, as they regularly use the community's sign language and the spoken or written language of the larger hearing society (Ann 2001). Since this type of bilingualism involves two languages of different modalities of expression, it is commonly referred to as bimodal bilingualism, sign bilingualism, or cross-modal bilingualism. In addition, many deaf individuals know and use other sign languages or other spoken/written languages (for discussion of additional dimensions of bilingualism in deaf communities, see chapter 35, Language Contact and Borrowing).

Linguistic competences in sign language and spoken language can vary substantially (Grosjean 2008; Lucas/Valli 1992; Padden 1998). Linguistic profiles range from native fluency in one or both languages to delayed, partial, or even only rudimentary skills. The reasons for this variation relate to such diverse factors as the age at which hearing loss occurred, the degree of deafness, the age of exposure to the respective languages, the hearing status of the parents and their family language policy, schooling, and social networks (Emmorey 2002; Fischer 1998; Grosjean 2008; van den Bogaerde/Baker 2002). In recent years, sociolinguistic and psycholinguistic research has shown that sign bilingualism is as dynamic as other types of bilingualism, and that bilingual signers, like other bilinguals, skilfully exploit their linguistic resources.

It is important to note in this context that deaf individuals have only recently been recognised as bilingual language users (Grosjean 2008; Plaza-Pust/Morales-López 2008; Padden 1998), following the gradual recognition of sign languages as bona fide languages from the 1960s onwards. Since then, deaf activism worldwide has led to a wider perception of deaf communities as linguistic minority communities, with the official recognition of sign languages and their inclusion in the education of deaf children among their central demands. However, questions concerning the use of sign languages and spoken/written languages in the education of deaf individuals, and the impact of signing on the development of spoken language, have preoccupied professionals and scholars for the last two centuries (Bagga-Gupta 2004; Tellings 1995). Beyond the controversy over the most appropriate educational methods, the establishment of deaf schools has been of critical importance in the development of deaf communities and their sign languages (Erting/Kuntze 2008; Ladd 2003; Monaghan 2003; Padden 1998), the emergence of Nicaraguan Sign Language being a recent example (Senghas 2003; see chapter 36 for discussion).

Education also plays a prominent role in relation to bilingualism in other minority groups. However, bilingual development of sign language and spoken/written language in deaf children is determined by two unusual factors, namely (i) the unequal status of the languages at the level of parent-child transmission (more than 90 % of deaf children are born to hearing, non-signing parents) and (ii) the unequal accessibility of the languages (no or only limited access to auditory input).
The response of the educational (and political) institutions to the linguistic needs of deaf students is a major theme in bimodal bilingualism research. The diversity of approaches to communication with deaf children can be described as a continuum that ranges from a strictly monolingual (oralist) to a (sign) bilingual model of deaf education, with variation in concepts of bilingual education also reflecting different objectives in language planning in relation to sign languages. Another major topic in the field concerns the interaction of the two languages in their acquisition and use, with no consensus in the domain of deaf education on the role of sign language in the acquisition of literacy (Chamberlain/Mayberry 2000).
2. Early records of deaf education

Until fairly recently, little was known about sign languages, the language behaviour of their users, and the status of these languages in education and society at large. Studies on the early records of deaf individuals’ use of signs to communicate and the first attempts to educate deaf children (Lang 2003) report that manual means of communication – where they were noted – were not referred to as ‘language’ on a par with spoken languages, and deaf individuals were not regarded as bilinguals. However, questions about the ‘universal’ nature of gesture/signing (Woll 2003) and the use of manual means of communication (in particular, manual alphabets) have been addressed since the beginnings of deaf education (for the development of deaf education, see also chapter 38, History of Sign Languages and Sign Language Linguistics). The first records of deaf education date from the 16th century. Deaf children from aristocratic families were taught individually by private tutors (often members of religious congregations, such as monks). Spoken language was taught to these children with two main objectives: legal (i.e. to enable them to inherit) and religious. The earliest documents report on the teachers’ successes rather than describe the methods used (Gascón-Ricao/Storch de Gracia y Asensio 2004; Monaghan 2003). As a result, little is known about the methods used around 1545 by Brother Ponce de León, a Spanish Benedictine monk commonly regarded as the first teacher of the deaf. There are, however, some indications that he used a manual alphabet with his pupils, a practice that spread to several European countries following the publication of the first book about deaf education by Juan de Pablo Bonet in 1620. Pablo Bonet acknowledged signing as the natural means of communication among the deaf, but advised against its use in the education of deaf children, reflecting his main educational aim: the teaching of the spoken language. Publications on deaf education which mention the use of signs to support the teaching of the spoken/written language appeared soon after in Britain and France (Gascón-Ricao/Storch de Gracia y Asensio 2004; Woll 2003; Tellings 1995). Classes for deaf children were established more than a century later, in the 1760s, at Thomas Braidwood’s private academy in Edinburgh and at the French National Institute for Deaf-Mutes in Paris, founded by the Abbé de l’Epée. The latter was the first public school for deaf children, admitting not only children of wealthy families but also charity pupils. Soon after, schools for deaf children were founded in other centres across Europe (for example, in Leipzig in 1778, in Vienna in 1779, and in Madrid in 1795) (Monaghan 2003). At that time, the state and religious groups (often the Catholic Church) were the major stakeholders in the education of deaf children. Priests, nuns, and monks founded schools in other countries throughout the world. In some cases, deaf and hearing teachers who had worked in schools for the deaf in Europe went on to establish educational institutions abroad. For example, the first school for the deaf in Brazil was founded in Rio de Janeiro in 1857 by Huet, a deaf teacher from Paris (Berenz 2003). By the end of the 19th century, education had reached many deaf children; however, education for deaf children did not become compulsory until much later – in many countries only in the second half of the 20th century (see various chapters in Monaghan et al. 2003). Developments in deaf education from the late 18th century to the end of the 19th century were also crucial in relation to policies about educational objectives and communication. While the goal of teaching deaf children the spoken/written language was shared, the means to achieve this goal became a matter of heated debate that continues to divide the field today (Gascón-Ricao/Storch de Gracia y Asensio 2004; Lane/Hoffmeister/Bahan 1996; Tellings 1995). De l’Epée, the founder of the Paris school, believed that sign language was the natural language of deaf individuals. In his school, deaf pupils were taught written language by means of a signed system (‘methodical signs’) which he had developed, comprising the signs used by deaf people in Paris and additional signs invented to convey the grammatical features of French. The impact of his teaching went well beyond Paris: several other schools that adopted this method were established in France, and a teacher trained in this tradition, Laurent Clerc, established the American Asylum for the Deaf in Hartford (Connecticut) in 1817 together with Thomas Gallaudet. Teachers trained in this institution later established other schools for deaf children using the same approach throughout the US (Lane/Hoffmeister/Bahan 1996). This philosophy – which promoted the use of signs (even though it included artificial signs), recognised the value of sign language for communication with deaf children, and acknowledged the role of sign language in teaching written language – was increasingly challenged by those who argued in favour of the oralist approach. Oralism regarded spoken language as essential for a child’s cognitive development and for full integration into society, restricted communication to speech and lipreading, and treated written language learning as secondary to the mastery of the spoken language. One of the most influential advocates of the oral method in deaf education, and of its spread in Germany and allied countries, was Samuel Heinicke, a private tutor who founded the first school for the deaf in Germany in 1778. The year 1880 is identified as a turning point in the history of deaf education. During the International Congress on the Education of the Deaf held in Milan in that year, a resolution was adopted in which the use of signs in the education of deaf children was rejected and the superiority of the oral method affirmed (Gascón-Ricao/Storch de Gracia y Asensio 2004; Lane/Hoffmeister/Bahan 1996). The impact of this congress, attended by hearing professionals from only a few countries, must be understood in relation to more general social and political developments towards the end of the 19th century (Monaghan 2003; Ladd 2003). Over the following years, most schools switched to an oralist educational policy, so that by the early 20th century, oralism was dominant in deaf education. While the Milan congress was a major setback for the role of sign language in deaf education, sign languages continued to be used in deaf communities.
Indeed, although oralist in educational orientation, residential schools for the deaf continued to contribute to the development and maintenance of many sign languages throughout the following decades, as sign languages were transmitted from one generation to another through communication among the children outside the classroom. These institutions can therefore be regarded as important sites of language contact (Lucas/Valli 1992) and, by extension, of sign bilingualism, even though deaf people were not specifically aware of their bilinguality at the time. It is important to emphasise that the Milan resolution did not have an immediate effect in all countries (Monaghan et al. 2003). In some, the shift towards oralism only occurred decades later, as was the case in Ireland, where signing continued to be used in schools until well into the 1940s (LeMaster 2003). In other countries, for example, the US, sign language retained a role in deaf education in some schools. In China, oralism was introduced only in the 1950s, based on reports about the use of this method in Russia (Yang 2008).
3. Bimodal bilingualism at the societal level

In the course of the last three decades, administrations in several countries have been confronted with questions concerning language planning measures targeting sign languages, such as their legal recognition, their inclusion in deaf children’s education, or the provision of interpretation. Grassroots pressure by deaf associations and related interest groups has been instrumental in getting these issues onto the agendas of governments throughout the world. Importantly, much of the impetus for local activities has come from international deaf activism, based on concepts such as Deaf community and Deaf culture that are linked to the notion of sign language as a symbol of identity (Ladd 2003). Indeed, hearing loss alone does not determine Deaf community membership; membership crucially depends on the choice of sign language as the preferred language (Woll/Ladd 2003) and on solidarity, based on the concept of attitudinal deafness (Ladd 2003; Erting/Kuntze 2008).
3.1. The Deaf community as a linguistic minority group

The development of a socio-cultural (or socio-anthropological) view of deafness and the related demands for the legal recognition of sign languages and their users as members of linguistic minorities throughout the world are examples of the internationalisation of political activism (recently referred to in terms of a ‘globalization of Deafhood’, Erting/Kuntze 2008), which is reflected in similar sociolinguistic changes that have affected deaf communities in several countries (Monaghan et al. 2003). Historically, the gradual self-assertion of deaf individuals as members of a linguistic minority in the late 20th century is tied to the insights obtained from linguistic research on sign languages, on the one hand, and to socio-political developments toward the empowerment of linguistic minorities, on the other hand. Following the recognition of sign languages as full languages in the 1960s, deaf people themselves felt empowered to claim their rights as a linguistic minority group, on a par with other linguistic minority groups that were granted linguistic rights at the time (Morales-López 2008). The official recognition of sign languages, as well as their inclusion in the education of deaf children, are central demands. In Sweden, where the provision of home-language teaching to minority and immigrant children was stipulated by the 1977 ‘home language reform’ act (Bagga-Gupta/Domfors 2003), sign language was recognised in 1981 as the first and natural language of deaf individuals. The work of Swedish sign language researchers (inspired by Stokoe’s research into ASL), Deaf community members, and NGOs brought about this change in language policy, which was soon reflected in the compulsory use of sign language as the language of instruction at schools with deaf children. In the US, the Deaf President Now movement, organised by Gallaudet University students in March 1988 and leading to the appointment of the first deaf president of that university, not only raised awareness of the Deaf community in the hearing society, it was also “above all a reaffirmation of Deaf culture, and it brought about the first worldwide celebration of that culture, a congress called The Deaf Way, held in Washington, DC, the following year”, with thousands of deaf participants from all over the world (Lane/Hoffmeister/Bahan 1996, 130). These two events gave impetus to the Deaf movement, which has influenced the political activism of deaf communities in many countries. Sociolinguistic research into deaf communities around the globe has provided further insights into how developments at a broad international level can combine with local sociolinguistic phenomena in some countries, while having little impact on the situation of deaf people in others. Positive examples include Spain and South Africa. In Spain, political activism of deaf groups throughout the country began in the 1990s, influenced by the worldwide Deaf movement (Gras 2008) and by the socio-political changes in that country concerning the linguistic rights granted to regional language minorities after the restoration of democracy in the late 1970s (Morales-López 2008). A similar relationship between political reforms and the activities of local deaf communities is reported by Aarons and Reynolds (2003) for South Africa, where the recognition of South African Sign Language (SASL) was put on the political agenda after the end of the apartheid regime, with the effect that the 1996 constitution protects the rights of deaf people, including the use of SASL. In contrast, in many other African countries, socio-cultural and economic circumstances (widespread poverty, lack of universal primary education, negative attitudes towards deafness) work against the building of deaf communities (Kiyaga/Moores 2003). In some cases, Deaf communities in developing countries have been influenced by Deaf communities from other countries. One example, discussed in Senghas (2003), is the assistance provided by the Swedish Deaf community to the Nicaraguan Deaf community during its formation and organisation, through exchanges between members of the two communities; the Swedish Deaf community also funded the centre for Deaf activities in Managua. Despite the differences in the timing of the ‘awakening’ of deaf communities in different countries, the developments sketched here point to the significance of the “process by which Deaf individuals come to actualise their Deaf identity” (Ladd 2003, xviii) and the nature of Deaf people’s relationships to each other – two dimensions that are captured by the concept of Deafhood, developed by Paddy Ladd in the 1990s (Ladd 2003).
Padden and Humphries (2005, 157) highlight the sense of pride brought about by the recognition of sign languages as full languages: “To possess a language that is not quite like other languages, yet equal to them, is a powerful realization for a group of people who have long felt their language disrespected and besieged by others’ attempts to eliminate it”.
Indeed, signers’ reports on their own language socialisation and their lack of awareness that they were bilingual (Kuntze 1998) are an indication of the effect of oralism on the identities of deaf individuals. It should be noted in this context that hearing people with deaf parents shared the experiences of their families as “members of two cultures (Deaf and hearing), yet fully accepted by neither” (Ladd 2003, 157).
3.2. Language planning

Until the end of the 20th century, language planning and language policies negatively impacted on bilingualism in Deaf communities. The situation has changed as measures specifically targeting sign languages and their users have been taken in some countries (see chapter 37, Language Planning, for further discussion). There has, however, been concern about whether the steps taken meet the linguistic and educational needs of deaf individuals (Cokely 2005; Gras 2008; Morales-López 2008; Reagan 2001). Abstracting away from local problems, studies conducted in various social contexts reveal similar shortcomings in three major areas: (i) sign language standardisation, (ii) sign language interpretation, and (iii) education. Among the most controversial language planning measures are those that affect the development of languages. Sign languages have typically been used in informal contexts, with a high degree of regional variation. With the professionalisation of interpreting, increased provision of interpreting in schools and other contexts, and the teaching of sign languages to deaf and hearing learners, these features have led to a demand for the development of new terminology and more formal registers. Standardisation deriving from expansion of language function is often contentious because it affects everyday communication in multiple ways. Communication problems may arise (e.g. between sign language interpreters and their consumers), and educational materials may either not be used (as is the case with many sign language dictionaries, see Johnston 2003; Yang 2008) or be used in unforeseen ways (Gras 2008). Changes at the legal level concerning the recognition of sign language do not always have the expected effects. In France, for example, the 1991 Act granted parents of deaf children free choice with respect to the language used in the education of their children, but did not stipulate any concrete measures, either with respect to provisions for meeting the needs of those choosing this option or with respect to the organisation of bilingual teaching where it was offered (Mugnier 2006). Aarons and Reynolds (2003) describe a similar situation for South Africa regarding the 1996 South African Schools Act, which stipulated that SASL be used as the language of instruction. In general, scholars agree that many of the shortcomings encountered are related to the lack of a holistic approach to sign language planning characterised by coordinated action and involvement (Gras 2008; Morales-López 2008). Indeed, in many social contexts, the measures taken represent political ‘concessions’ to pressure groups (deaf associations, educational professionals, parents of deaf children), often made with little understanding of the requisites and effects of the steps taken. The question of whether and how diverse and often conflicting objectives in the area of deaf education are reconciled is addressed in the next section.
4. Deaf education in the 21st century

4.1. Diversification of educational methods

Moores and Martin (2006) identify three traditional concerns of deaf educators: (i) Where should deaf children be taught? (ii) How should they be taught? (iii) What should they be taught? From the establishment of the first schools for the deaf in the 18th century until today, the different answers to these questions are reflected in the diversity of educational methods, including the bilingual approach to deaf education. Developments leading to a diversification of educational options have been similar in many countries, reflecting, on the one hand, the impact of the international Deaf movement and related demands for sign bilingual education, and, on the other hand, the more general trend toward inclusive education. However, variation in educational systems reflects socio-political and cultural characteristics unique to individual countries. It is important to note in this context that throughout the world, many deaf children continue to have no access to education (Kiyaga/Moores 2003). Indeed, it has been estimated that only 20 % of all deaf children worldwide have the opportunity to go to school (Ladd 2003). In many developing countries, universal primary education is not yet available; because resources are limited, efforts are concentrated on the provision of general education. Deaf education, where it is available, is often provided by non-governmental organisations (Kiyaga/Moores 2003). Comparison of the different educational options available shows that provision of deaf education varies along the same dimensions as those identified for other types of bilingual education (Baker 2007): (a) status of the languages (minority vs. majority language), (b) language competence(s) envisaged (full bilingualism or proficiency in the majority language), (c) placement (segregation vs. mainstreaming), (d) language backgrounds of the children enrolled, and (e) allocation of the languages in the curriculum. From a linguistic perspective, the spectrum of communication approaches used with deaf children forms a continuum that ranges from a strictly monolingual (oralist) model to a spoken/written language and sign language bilingual model of deaf education, with intermediate options characterised either by the use of signs as a supportive means of communication or by the teaching of sign language as a second language (Plaza-Pust 2004). Variation in the status assigned to sign language in deaf education bears many similarities to the situation of other minorities, but there are also marked differences, relating to the accessibility of the minority vs. the majority language for deaf children and to the types of intervention provided using each language. This is also reflected in the terminological confusion that remains widespread in deaf education, where the terms ‘signs’ or ‘signing’ are used to refer to any type of manual communication, without a clear distinction between the use of individual signs, artificially created signed systems, and natural sign languages. Only the latter are fully developed, independent language systems, acquired naturally by deaf children of deaf parents. The first alternative approaches to the strictly oralist method were adopted in the US in the 1970s as a response to the low linguistic and academic achievements of deaf children educated orally (Chamberlain/Mayberry 2000).
Against the backdrop of strict oralism, the inclusion of signs to improve communication in the classroom marked an important step. However, the objective of what are commonly referred to as the Total Communication or Simultaneous Communication approaches in deaf education was still mastery of the spoken language. For this purpose, artificial systems were developed, consisting of sign language elements and newly created signs (for example, Seeing Essential English (SEE-1; Anthony 1971) and Signing Exact English (SEE-2; Gustason/Zawolkow 1980) in the US). The use of these artificial systems to teach spoken language, combined with the relative ease with which hearing teachers could ‘master’ them (since only the lexicon, rather than a different grammar, had to be learned), contributed to their rapid spread in many countries, including the US, Australia, New Zealand, Switzerland, Germany, Thailand, and Taiwan (Monaghan 2003). It is important to note that the creation and use of these systems for didactic purposes represents a case of language planning with an assimilatory orientation (Reagan 2001). From a psycholinguistic perspective, these systems do not constitute a proper basis for the development of the deaf child’s language faculty, as they do not represent independent linguistic systems (Johnson/Liddell/Erting 1989; Lane/Hoffmeister/Bahan 1996; Fischer 1998; Bavelier/Newport/Supalla 2003). Moreover, adult models (hearing teachers and parents) commonly do not use them in a consistent manner, for example, frequently dropping functional elements (Kuntze 2008). It is clear, therefore, that what is often described in the educational area as the use of ‘sign language’ as a supportive means of communication needs to be distinguished from the use of sign language as a language of instruction in sign bilingual education programmes. Only in the latter case is the sign language of the surrounding Deaf community, a natural language, used as the language of instruction. In the second half of the 1980s, there was increasing awareness that Total Communication programmes were not delivering the results expected, particularly in relation to literacy. Against the backdrop of the cultural movement of the Deaf community, the Total Communication philosophy also clashed with the view of the Deaf community as a linguistic minority group, in that it was based on a medical view of deafness. That signed systems were artificial modes of communication and not natural languages was also reflected in studies documenting children’s adaptations of their signing to better conform to the constraints of natural sign languages, for example, with respect to the use of spatial grammar (Kuntze 2008). In addition, there was consistent evidence of better academic results for deaf children of deaf parents (DCDP), which contributed to an understanding of the linguistic and educational needs of these children, in particular, the relevance of access to a natural language early in childhood (Tellings 1995; Hoffmeister 2000; Johnson/Liddell/Erting 1989).
4.2. Bilingual education

By recognising the relevance of sign language for the linguistic and cognitive development of deaf children, the bilingual/bicultural approach to deaf education marked a new phase in the history of deaf pedagogy (Johnson/Liddell/Erting 1989). Sweden pioneered this phase by recognising sign language as the first language of deaf people in 1981 and implementing the Special School curriculum in 1983, with the aim of promoting bilingualism (Bagga-Gupta/Domfors 2003; Svartholm 2007). The policy has resulted in the implementation of a uniform bilingual education option (Bagga-Gupta/Domfors 2003), which contrasts markedly with the range of options offered in other countries from the 1990s onwards. There is no comprehensive systematic comparison of bilingual education programmes internationally. However, some scholars have addressed the issue of variation in the bilingual conceptions applied.
4.2.1. Sign language promotion

There are two main tenets of bilingual deaf education: (i) sign language is the primary language of deaf children in terms of accessibility and development; (ii) sign language is to be promoted as a ‘first’ language. How these tenets are operationalised varies in relation to the choice of the languages of instruction, the educational setting, and the provision of early intervention measures focussing on the development of a firm competence in sign language as a prerequisite for subsequent education. The majority of deaf children are born to non-signing hearing parents, so support for early intervention is particularly important given the relevance of natural language input during the sensitive period for language acquisition (Bavelier/Newport/Supalla 2003; Fischer 1998; Grosjean 2008; Leuninger 2000; Mahshie 1997). However, this requirement is often not met, and the children enter bilingual education programmes with little or no language competence. Specific difficulties arise in interpreted education, where children attend regular classes in a mainstream school, supported by sign language interpreting. In this type of education, it is common to take the children’s sign language competence for granted, with little effort put into the teaching of this language. In practice, many children are required to learn the language whilst using the language to learn, receiving language input from adult models who are mostly not native users of the language (Cokely 2005). In addition, these children often lack the opportunity to develop one important component of bilingualism, namely the awareness of their own bilinguality and knowledge about contrasts between their two languages (i.e. metalinguistic awareness), as sign language is hardly ever included as a subject in its own right in the curriculum (Morales-López 2008).
4.2.2. Spoken language promotion

In sign bilingual education programmes, the spoken/written language is commonly conceived of as a second language (L2) (Bagga-Gupta 2004; Günther et al. 1999, 2004; Knight/Swanwick 2002; Vercaingne-Ménard/Parisot/Dubuisson 2005; Yang 2008; Plaza-Pust/Morales-López 2008). In these programmes, the teaching of L2 literacy is approached via the L1, sign language. In general, the written language is given prominence over the spoken language; however, socio-political pressure to promote the spoken language skills of deaf children varies from one country to another, affecting the status assigned to this modality in bilingual teaching (Plaza-Pust 2004). Hence, many educational professionals in bilingual education programmes are confronted with the ethical dilemma of how to deal with the political pressure to deliver good results in the spoken language, on the one hand, and their knowledge about the impact of hearing impairment on the effort required to learn the spoken language, on the other hand (Tellings 1995, 121). Another development affecting the status of the spoken language pertains to the increasing sophistication of technology – in particular, in the form of cochlear implants (CIs) (Knoors 2006). As more and more cochlear-implanted children attend bilingual programmes – a trend that reflects the overall increase in provision of these devices – the aims of bilingual education in relation to use of the spoken language need to be redefined to match the linguistic capabilities and needs of these children.
4.2.3. Languages of instruction

One crucial variable in bilingual education pertains to the choice of the main language(s) of instruction (Baker 2001). In some bilingual education programmes for deaf children, all curriculum subjects are taught in sign language. In this case, the spoken/written language is clearly defined as a subject in itself and taught as a second or foreign language. Other programmes, for example, those in Hamburg and Berlin (Günther et al. 1999), opt for a so-called ‘continuous bilinguality’ in the classroom, put into practice through team-teaching, with classes taught jointly by deaf and hearing teachers. In addition to spoken/written language and sign language, other codes and supportive systems of communication may be used, such as fingerspelling or Cued Speech (LaSasso/Lamar Crain/Leybaert 2010). In programmes where instruction through sign language is provided for only part of the curriculum, the continuing use of spoken language-based signed systems – not only for teaching properties of the L2 but also for content matter – is the subject of an ongoing heated debate. Opinions diverge on whether these systems are more useful than sign language in the promotion of spoken language acquisition in deaf children, given the ‘visualisation’ of certain grammatical properties of that language, as is argued by their advocates (Mayer/Akamatsu 2003). Critics maintain that the children are confronted with an artificial mixed system that presupposes knowledge of the very language which is to be learned (Kuntze 1998; Wilbur 2000). Between these two positions, some concede that signed systems may be of benefit in the teaching of the spoken language, while clearly disapproving of their use in place of sign language for the teaching of content matter.
4.2.4. Learners’ profiles

Teachers are confronted with a heterogeneous population of learners, with marked individual differences not only in terms of degree of hearing loss, but also with respect to prior educational experiences, linguistic profiles, and additional learning needs. In some types of bilingual education (co-enrolment, interpreted education), deaf and hearing children are taught in the same classroom. Indeed, particularly in the US, a widespread alternative to bilingual education in special schools or self-contained classrooms in mainstream schools is the provision of sign language interpreters in mainstream classrooms. Interpreted education is also provided in Spain, particularly in secondary education. In Spain and other countries, the transition from primary to secondary education involves a change of institution, and often also a change in the bilingual methods used, as the team-teaching found throughout primary education in some bilingual programmes is not available in secondary education (Morales-López 2008). Variation in learners’ profiles is often overlooked in these educational settings, even though adaptations to meet the linguistic abilities and learning needs of deaf children are necessary (Marschark et al. 2005). Co-enrolment of deaf and hearing children has been offered in Australia, the US, and several countries in Europe (Ardito et al. 2008; de Courcy 2005; Krausneker 2008). While studies of this type of bilingual education consistently report positive results, mirroring the findings reported for Dual Language programmes with minority and majority language children in the US (Baker 2007), these programmes are often offered for a limited time only. A bilingual programme in Vienna, for example, was discontinued after four years. For multiple reasons, including the temporary character of some bilingual programmes and changes in orientation from primary to secondary education, many deaf children are exposed to diverse methods in the course of their development, often without preparation for the changes affecting communication and teaching in their new classrooms (Gras 2008; Plaza-Pust 2004). Demographic changes relating to migration are also reflected in the changing deaf population (Andrews/Covell 2006). There is general awareness of the challenges this imposes on teaching and learning, in particular among professionals working in special education. However, in terms of both research and teaching, the lack of alignment between the spoken languages (and, at times, also the sign languages) used at home and in school remains largely unaddressed. It is clear that the concept of bilingual education, if taken literally (that is, as involving two languages only), does not capture the diversity that characterises deaf populations in many countries. Moreover, because of differences in educational systems, some deaf children from migrant families enrol in deaf schools without prior knowledge of any language, deaf education not having been available in their country of origin. The increasing number of deaf children with cochlear implants (CIs) – more than half of the population in the UK, for example (Swanwick/Gregory 2007) – adds a new dimension to the heterogeneity of linguistic profiles in deaf individuals. While most of these children are educated in mainstream settings, many CI children attend bilingual programmes, either because of low academic achievements in the mainstream or because of late provision of a CI. The generalised rejection of sign language in the education of these children in many countries contrasts with the continuing bilingual orientation of education policy in Sweden, where the support of professionals and parents of deaf children with CIs for a bilingual approach follows a pragmatic reasoning that acknowledges not only the benefits of bilingual education but also that the CI is not a remedy for deafness and that its long-term use remains uncertain (Svartholm 2007). In a similar vein, though based on the observation of remaining uncertainties concerning children’s eventual success in using CIs, Bavelier, Newport, and Supalla (2003) argue in favour of the use of sign language as a ‘safety net’.
4.2.5. Biculturalism

An aspect that continues to be controversial, and is also of relevance in discussions about appropriate educational placements, concerns the notion of biculturalism in the education of deaf children (Massone 2008; Mugnier 2006). Whilst for some educational professionals sign bilingual education is also bicultural, others reject the idea of deaf culture and the related bicultural components of deaf education. There are diverging views about whether sign bilingualism is the intended outcome (following the model of maintenance bilingual education) or is intended to be a transitional phenomenon serving as an ‘educational tool’, as in other types of linguistic minority education (Baker 2001). The latter view, widespread among teaching professionals (see Mugnier (2006) for a discussion of the situation in France and Massone (2008) for Argentina), commonly attributes to sign language the status of a teaching tool, with no acknowledgment of culture. Apart from questions about the inclusion of deaf culture as an independent subject, the discussion also affects the role assigned to deaf teachers as adult role models, linguistically and culturally. As Humphries and MacDougall (2000, 94) state: “The cultural in a ‘bilingual, bicultural’ approach to educating deaf children rests in the details of language interaction of teacher and student, not just in the enrichment of curriculum with deaf history, deaf literature, and ASL storytelling.”
4.2.6. Educational conceptions and policy

Because sign bilingual education is not institutionalised in the majority of countries, its provision often depends on the interest and support of parents, on the one hand, and the expertise of specialists offering such programmes, on the other hand. Not only do these programmes often struggle for survival, but professionals working in these settings also face the task of developing their own curricula, teaching materials, and assessment tools (Komesaroff 2001; Morales-López 2008; Plaza-Pust 2004). In many cases, teachers in bilingual education programmes have little or no training in bilingualism in general, or sign bilingualism in particular. In sign bilingual education, written language is taught as an L2, but teachers are rarely taught the theoretical underpinnings of this type of language acquisition (Bagga-Gupta/Domfors 2003; Morales-López 2008). Contrastive teaching is assigned an important role, but there is a general lack of knowledge about research in sign language linguistics and about the impact of critical language awareness on the developmental process, an issue that is a focus in the education of other linguistic minorities (Siebert-Ott 2001). Whatever the role assigned to the different professionals involved in the teaching of deaf students, a (near-)native level of proficiency in sign language should be a requirement. However, for multiple reasons, this requirement is often not met, and in-service training is often insufficient to fill the skill and knowledge gaps (Bagga-Gupta/Domfors 2003). In general, there is agreement that many of the shortcomings must be addressed in the context of coherent, holistic planning of bilingual education involving all stakeholders (administration, teachers, parents, deaf associations), with the aim of aligning the different measures that need to be taken, such as the provision of appropriate teacher training, the development of teaching materials specifically devised for sign bilingualism, and a focus on the aspects that distinguish sign bilingualism from other forms of bilingualism (use of two different modalities, lack of access to the spoken modality, etc.) (Morales-López 2008). Clearly, the absence of co-ordinated action results in ineffective use of the human and financial resources available.
It is also apparent that the aim of guaranteeing equity of access for all children often takes precedence over educational excellence. Since the 1970s, the objectives underlying the general trend toward educating deaf children in mainstream schools – namely, the choice of the least restrictive environment, integration, and inclusion (Moores/Martin 2006) – have increasingly been regarded as preferable to segregation (Lane/Hoffmeister/Bahan 1996), not only in many Western countries (Monaghan 2003) but also in countries such as Japan (Nakamura 2003, 211), with the effect that many special schools have been closed in recent years. In the US, the trend initiated through Public Law 94-142 (1975), which requires that education take place in the least restrictive environment for all handicapped children, has resulted in more than 75 % of deaf children being educated in the mainstream (Marschark et al. 2005, 57), compared with the 80 % of deaf children educated in residential schools in that country before the 1960s (Monaghan 2003). The pattern is similar in many other countries, for instance, in the UK, where only 8 % of deaf children are currently educated in special schools (Swanwick/Gregory 2007). Moores and Martin (2006) note, though, that this has been a long-term trend in the US, where education in mainstream settings was increasingly offered after the end of World War II, due to the growing child population, including deaf children, and a preference for classes for deaf children in mainstream schools over the building of additional residential schools. These observations point to the additional issue of the economics of education, which is often overlooked. To date, few cost-effectiveness studies have been undertaken (but see Odom et al. 2001). While little attention is paid to the economics of bilingual education in general (Baker 2007), the limited discussion of the cost benefits of sign bilingual education can also be considered an indication of the ideological bias of deaf educational discourse. A few scholars have speculated about whether the move toward mainstreaming was motivated by efforts to improve the quality of deaf education and promote deaf children’s integration, or was related to cost saving (Ladd 2003; Marschark et al. 2005). The provision of educational interpreting in mainstream or special educational settings is based on the assumption that, through interpreting, deaf children are provided with the same learning opportunities as hearing children (Marschark et al. 2005). However, there is limited evidence concerning the effectiveness of educational interpreting, and little is known about the impact of the setting, the children’s language skills, the interpreters’ language skills, and the pedagogical approach on information transmission. In their study of interactions between deaf and hearing peers outside the mainstream classroom, Keating and Mirus (2003) observed that deaf children were more skilful in accommodating to their hearing peers than vice versa, and concluded that mainstreaming relies on an unexamined model of cross-modal communication. A major challenge to traditional concepts of bilingual education and the development of appropriate language planning measures concerns the increasing number of deaf children with CIs. While this general trend and the revival of radical oralist views of deaf education are acknowledged, the long-term impact of CIs on educational programmes for deaf students is not yet clear.
Mainstreaming is the usual type of educational setting provided for children with a CI, although there is little research providing empirical support for the appropriateness of this generalised choice. Studies indicate substantial variation in the developmental progress of cochlear-implanted, orally educated children. The individual differences observed suggest that some of them would certainly profit from the use of sign language in their education (Szagun 2001). As for the children themselves, remarkably little is known about their educational preferences. Interview data are available from adult deaf people revealing their views of oralist education and special schools, but at the time of their education, bilingual approaches were not an option (Krausneker 2008; Lane/Pillard/French 2000; Panayiotis/Aravi 2006). Mainstreamed children in Spain, when asked whether they would prefer interpreters or support from tutors (most of whom were competent in sign language), expressed their preference for the latter option (Morales-López 2008), highlighting the importance of face-to-face communication in the teaching and learning of content matter. As for the increasing number of deaf children with CIs, little is known about their views concerning the impact of a CI on their quality of life (Preisler 2007). Finally, it should be noted that most research has been orientated towards demonstrating the effectiveness of one educational option over another, reflecting the interdependence of research, policy, and practice (Bagga-Gupta 2004; Plaza-Pust/Morales-López 2008). Demonstrating the success of a bilingual approach may be of critical importance in ensuring its maintenance, even when it has not been appropriately implemented. A few scholars (Bagga-Gupta 2004; Moores 2007) have drawn attention to the dangers of ideological bias in research, which should instead analyse and reflect critically.
5. Bilinguals

Because of the diversity of factors determining the acquisition and use of sign language and spoken language in deaf individuals, bimodal bilingualism offers a rich field of research into the complex inter-relation of external sociolinguistic and internal psycholinguistic factors that shape the outcomes of language contact. Indeed, recent years have witnessed an increased interest in the study of this type of bilingualism. The next sections summarise the major findings obtained in the psycholinguistic and sociolinguistic studies conducted, in particular concerning (i) developmental issues in the acquisition of the two languages, (ii) sociolinguistic aspects determining patterns of language use in deaf communities, and (iii) (psycho-)linguistic characteristics of cross-modal language contact phenomena.
5.1. Bilingual learners: acquisition of sign language as an L1 and spoken/written language as an L2

Studies of hearing children’s bilingual development commonly concern bilingual families (Lanza 1997). The situation is markedly different regarding research into sign bilingualism, in which longitudinal studies of family bilingualism are rare, although there are exceptions, such as the studies conducted by Petitto and colleagues (2001) and Baker and van den Bogaerde (2008). Over the past decade, such factors as the scarcity of bilingual educational settings and the lack of appropriate measures of sign language knowledge (Chamberlain/Mayberry 2000; Singleton/Supalla 2003) have changed, and an increasing number of studies have been conducted. The specific circumstances that determine exposure and access to spoken/written language and sign languages in deaf children raise a number of issues concerning the status attributed to the two languages as L1 (sign language) and L2 (spoken/written language). The notions of mother tongue or first language(s) are commonly bound to criteria of age (first language(s) acquired) and environment (language(s) used at home), while full access to the language(s) learned is assumed. In the case of deaf children, however, accessibility should be considered the defining criterion of which language is their primary language, given that they can only fully access and naturally acquire sign languages (Berent 2004; Grosjean 2008; Leuninger 2000; Mahshie 1997). Age of exposure to sign language is a critical issue for the large majority of deaf children born to non-signing hearing or deaf parents. Whether they acquire sign language successfully through contact with signing peers or adult signers depends on such factors as parents’ choices about language, medical advice, and early intervention, in a context where the medical view of deafness prevails (Morales-López 2008; Yang 2008; Plaza-Pust 2004). Even in countries where bilingual education is available, many parents learn only later that it is an option for their deaf child, so that many children enter such programmes only when they are older. There are many reasons why deaf children are not exposed to sign language during the sensitive period for language acquisition. It has been argued that the full accessibility of the language may compensate for delay in exposure; however, there is evidence that late learners (5–10 years old) of a sign language as an L1 may not become fully fluent (Mayberry 2007). Another issue concerns the potential impact of the early use of an artificial signed system at home and/or in pre-school on the later development of sign language. Despite these caveats, sign language is generally referred to as the L1 of bilingually educated deaf children. With respect to the status of the spoken/written language, there is some consensus that written language can be directly acquired by bilingually educated deaf children as an L2 (Leuninger 2000; Plaza-Pust 2008; Vercaingne-Ménard/Parisot/Dubuisson 2005). However, there is little agreement on whether deaf children can compensate for the lack of access to spoken language by taking other pathways in the learning of written language to successfully acquire the L2 grammar. Many researchers discuss deaf children’s written language development in the context of the impact of hearing loss, specifically in relation to the role of phonological awareness in the development of literacy (Musselman 2000; Mayer 2007). A different view is adopted by others, who emphasise the need to look at written language in its own right. Günther (2003), for example, maintains that although written language is related to spoken language, it is an autonomous semiotic system. Learners must ‘crack the code’ along the lines proposed for other acquisition situations; that is, they have to identify the relevant units of each linguistic level, the rules that govern their combination, and the inter-relation of the different linguistic levels of analysis. Both innate knowledge and the linguistic environment are assumed to contribute to this process (Plaza-Pust 2008).
While numerous studies provide detailed accounts of error types in deaf children’s written productions, few scholars discuss their findings about deaf children’s acquisition of the written language in relation to linguistic theory. Wilbur (2000) distinguishes internal and external sources of errors in writing. As the errors found resemble the rule-based errors found in the learner grammars of hearing L2 learners (i.e. omissions or overgeneralisations), it is assumed that they reflect developmental stages. However, the characteristic long-term persistence of these errors is reminiscent of plateau or fossilisation effects in second language learner grammars and suggests that the development of written language by deaf children might be delayed or truncated as a result of restrictions in both the quantity and the quality of the input. Following this line of reasoning, it has been argued that the traditional teaching of written language structures in isolation, with a focus on formal correctness, comes at the expense of creative uses of written language that would allow deaf children to acquire subtle grammatical and pragmatic properties (Günther et al. 2004; Wilbur 2000). Studies that specifically address similarities and differences between deaf learners and other learners continue to be rare. In longitudinal studies, both Krausneker (2008) and Plaza-Pust (2008) compare deaf learners’ L2 written language development with that of other L2 learners of the same language in order to ascertain whether the underlying language mechanisms are the same. Krausneker (2008) directly compared hearing and deaf children’s development of L2 German. She explains differences between the learner groups in developmental progress as resulting from differences in the amount of input available: while hearing children are continuously exposed to the L2, deaf children’s input and output in this language are much more restricted. Plaza-Pust (2008) found that the bilingually educated deaf children in her study acquired the target German sentence structure like other learners, but with marked differences in individual progress. She argues that variation in the learners’ productions is an indicator of the dynamic learning processes that shape the organisation of language, as has been shown to be the case in other contexts of language acquisition. Additionally, where written language serves as the L2, the question of the potential role of sign language (the L1) in its development is fundamental to an appropriate understanding of how deaf children may profit from their linguistic resources in the course of bilingual development.
5.2. Bilingual development: pooling of linguistic resources

Over recent years, several hypotheses have been put forward with respect to positive and negative effects of cross-modal language interaction in sign bilingual development (Plaza-Pust 2008). In research on bilingualism in two spoken languages, this is usually expressed as a facilitating or accelerating vs. a delaying effect on the learning of target language properties (Odlin 2003). A variety of terminology is found in the literature, including that concerned with sign bilingualism, to refer to different types of interaction between two or more languages in the course of bilingual development, such as ‘language transfer’, ‘linguistic interference’, ‘cross-linguistic influence’, ‘code-mixing’, and ‘linguistic interdependence’. Many of these terms have negative connotations which indicate attitudes toward bilingualism and bilinguals’ language use, reflecting a common view that the ‘ideal’ bilingual is two monolinguals in one person, who should keep the two languages separate at all times. Studies on language contact phenomena in interactions among adult bilinguals, including bilingual signers, and in the productions of bilingual learners have shown that language mixing is closely tied to the organisation of language, on the one hand, and to the functional and sociolinguistic dimensions of language use, on the other hand (Grosjean 1982, 2008; Tracy/Gawlitzek-Maiwald 2000), with a general consensus that bilingual users, including bilingual learners, exploit their linguistic resources in both languages.
5.2.1. Cross-modal language borrowing

Following a long debate about the separation or fusion of languages in early bilingual acquisition (Tracy/Gawlitzek-Maiwald 2000; Meisel 2004), there is a consensus that both languages develop separately from early on. This is supported by longitudinal studies on the acquisition of diverse language pairs, including the acquisition of sign language and spoken language in hearing children (Petitto et al. 2001). More recently, some researchers have studied language mixing in young bilinguals and concluded that their languages may temporarily interact in the course of bilingual development (Genesee 2002). Particularly in cases where there is asymmetry in the development of the two languages, learners may use a ‘relief strategy’ of temporarily borrowing lexical items or structural properties from the more advanced language. It has also been argued that the sophisticated combination of two distinct grammars in learners’ mixed utterances not only reveals the structures available in each language, but also shows that learners know, by virtue of their innate language endowment (i.e. Universal Grammar), that grammars are alike in fundamental ways. In relation to the bilingual acquisition of sign language and spoken/written language by deaf children, some scholars acknowledge in general terms that L1 sign language knowledge, drawing on Universal Grammar, might reduce the complexity of acquiring a written language as an L2 (Wilbur 2000), but do not consider cross-linguistic influence or borrowing as outlined above. Günther and colleagues (1999) maintain that sign language serves a critical role in the bilingual development of deaf children. Based on a study of the writing of deaf children attending the Hamburg bilingual programme, they claim that these children profit from their knowledge of German Sign Language (DGS) in two respects. Firstly, they benefit from general knowledge attained through this language (general world knowledge and also knowledge about story grammar) in their production of written narratives (Günther et al. 2004). Secondly, they compensate for gaps in their written language grammar by borrowing sign language structures. Crucially, the authors show that DGS influence was a temporary phenomenon; as the learners’ knowledge of written German increased, the use of DGS borrowings decreased. Plaza-Pust and Weinmeister (2008) specifically address the issue of cross-modal borrowing in relation to learners’ grammatical development in both languages. Their analysis of signed and written narratives (collected in the context of a longitudinal investigation into the bilingual acquisition of DGS and written German by deaf children attending the bilingual programme in Berlin) shows that lexical and structural borrowings occur at specific developmental phases in both languages, with structural borrowings decreasing as learners progress. Once the target structural properties are established, language mixing serves other, pragmatic functions (code-switching). The use of language mixing in both signed and written productions was relatively infrequent; for one participant, no language mixing was observed in either DGS or German. Individual variation shows patterns similar to those described in research on the bilingual acquisition of two spoken languages (Genesee 2002). Finally, the data reveal a gradual development of the grammars of both languages, with differences among learners in the extent of their development over the period covered by the study.
5.2.2. Inter-dependence of sign language and literacy skills

Academic disadvantages resulting from a mismatch between L1 and L2 skills are most pronounced in linguistic minority members, in particular, in the case of socially stigmatised minorities (Saiegh-Haddad 2005). Cummins’ Interdependence Hypothesis (1991), which sees a strong foundation in the L1 as a prerequisite for bilingual children’s academic success, targets functional distinctions in language use and their impact on academic achievements in acquisition situations in which the home language (L1) differs from the language (L2) used in school. Because the theoretical justifications for a bilingual approach to the education of linguistic minority children and of deaf children bear important similarities (Strong/Prinz 2000, 131; Kuntze 1998), the Interdependence Hypothesis has been widely used in the field of deaf education. As the role of spoken language in the linguistic and academic development of deaf children, including their reading development, is limited, the promotion of sign language as a base or primary language, although it is not the language used at home in the majority of cases, is fundamental to deaf children’s cognitive and communicative development (Hoffmeister 2000; Niederberger 2008).

As for Cummins’ Common Underlying Proficiency Hypothesis, which assumes cognitive academic proficiency to be interdependent across languages, it is important to note that this proficiency is not a monolithic ability but rather involves a number of components, making it necessary to carefully examine the skills that might be involved in the ‘transfer process’. In research on sign bilingual development, the identification of the skills that might belong to common underlying proficiency is further complicated by the fact that sign language, the L1 or base language, has no written form that might be used in literacy-related activities. Thus the notion of transfer or interaction of academic language skills needs to be conceived of independently of print. This in turn has led to a continuing debate about whether or not sign language can facilitate the acquisition of L2 literacy (Chamberlain/Mayberry 2000; Mayer/Akamatsu 2003; Niederberger 2008). The positive correlations between written language and sign language found in studies of ASL and English (Hoffmeister 2000; Strong/Prinz 2000) and other language pairs (Dubuisson/Parisot/Vercaingne-Ménard (2008) for Quebec Sign Language (LSQ) and French; Niederberger (2008) for French Sign Language (LSF) and French) have provided support for the assumption that good performances in both languages are indeed linked.

As for the specific sign language skills associated with specific literacy skills, several proposals have been put forward. Given the differences between the languages at the level of the modality of expression and organisation, some authors assume that interaction or transfer mainly operates at the level of story grammar and other narrative skills (Wilbur 2000). Other researchers believe that the interaction relates to more specifically linguistic skills manifested in the comprehension and production of sign language and written language (Chamberlain/Mayberry 2000; Hoffmeister 2000; Strong/Prinz 2000). Correlations between narrative comprehension and production levels in ASL and English reading and writing levels were higher than those between ASL morphosyntactic measures and English reading and writing. Niederberger (2008) reported a significant correlation of global scores in LSF and French and observed that correlations between narrative skills in both languages were higher than those relating to morphosyntactic skills. Additionally, sign language comprehension skills were more highly correlated with French reading and writing skills than sign language production skills were. Given that LSF narrative skills also correlated with French morphosyntactic skills, the interaction of both languages was assumed to involve more than global narrative skills.

The study by Dubuisson, Parisot, and Vercaingne-Ménard (2008) on the use of spatial markers in LSQ (taken as an indicator of global proficiency in LSQ) and higher level skills in reading comprehension showed a relationship between improvement in children’s use of spatial markers in LSQ and their ability to infer information when reading French. With respect to global ability in the use of space in LSQ and global reading comprehension, the authors reported a highly significant correlation in the first year of the study. More specifically, a correlation was found between the ability to assign loci in LSQ and the ability to infer information in reading. In a two-year follow-up, they observed a correlation between locus assignment in LSQ and locating information in reading, and between global LSQ scores and locating information in reading.

In summary, the results obtained from these studies show a relation between sign language development and literacy skills. However, they do not provide any direct information about the direction of the relationship, and some of the relations in the data remain unaccounted for at a theoretical level, in particular where the links concern grammatical properties and higher level processes.
5.2.3. Language contacts in the classroom and the promotion of metalinguistic skills

In the course of bilingual development, children learn the functional and pragmatic dimensions of language use and develop the capacity to reflect upon and think about language, commonly referred to as metalinguistic awareness. From a developmental perspective, the ability to monitor speech (language choice, style), which appears quite early in development, can be distinguished from the capacity to express and reflect on that knowledge (Lanza 1997, 65). It is important to note that metalinguistic awareness is not attained spontaneously but is acquired through reflection on structural and communicative characteristics of the target language(s) in academic settings (Ravid/Tolchinsky 2002). Thus with respect to the education of deaf children, the question arises as to whether and how these skills are promoted in the classroom.

One salient characteristic of communication in the sign bilingual classroom is that it involves several languages and codes (see section 4.2). This diversity raises two fundamental issues about communication practices in the classroom and the children’s language development. On the one hand, because modality differences alone cannot serve as a clear indicator of language, scholars have remarked on the importance of structural and pragmatic cues in providing information about differences between the languages. In particular, distinctive didactic roles for the different languages and codes used in the classroom seem to be fundamental for successful bilingual development.
On the other hand, Padden and Ramsey (1998) state that associations between sign language and written language must be cultivated. Whether these associations pertain to the link between fingerspelling and the alphabetic writing system, or to special registers and story grammar in both languages, an enhanced awareness of the commonalities and differences will help learners to skilfully exploit their linguistic resources in the mastery of academic content. While language use in classrooms for deaf children is complex (Ramsey/Padden 1998), studies of communication practices in the classroom show that language contact is used as a pedagogical tool: teachers (deaf and hearing) and learners creatively use their linguistic resources in dynamic communication situations, and children learn to reflect on language, its structure, and its use. Typically, activities aimed at enhancing the associations between the languages involve their use in combination with elements of other codes, as in the use of teaching techniques commonly referred to as chaining (Padden/Ramsey 1998; Humphries/MacDougall 2000; Bagga-Gupta 2004) or sandwiching, where written, fingerspelled, and spoken/mouthed items with the same referent follow each other. An illustration of this technique is provided in (1) (adapted from Humphries/MacDougall 2000, 89).
(1) volcano (initialized sign) ⫺ v-o-l-c-a-n-o (fingerspelling) ⫺ point to printed word ‘volcano’ ⫺ v-o-l-c-a-n-o (fingerspelling)
Particularly in the early stages of written language acquisition, knowledge of and attention to the relationships between the different languages and codes become apparent in the communication between the teachers and the children: children use sign language in their enquiries about translation equivalents; once the equivalence in meaning is agreed upon, fingerspelling is used to confirm the correct spelling of the word. At times, children and teachers may also use spoken words or mouthings in their enquiries. The following example describes code-switching occurring upon the request of a ‘newcomer’ deaf student:

Roy […] wanted to spell ‘rubber’. He invoked the conventional requesting procedure, waving at Connie [the deaf teacher] and repeatedly signing ‘rubber’. […] As soon as she turned her gaze to him, Roy began to sign again. She asked for clarification, mouthing ‘rubber? rubber?’, then spelled it for him. He spelled it back, leaving off the final ‘r’. She assented to the spelling, and he began to write. John, also at the table and also experienced with signed classroom discourse, had been watching the sequence as well, and saw that Roy had missed the last ‘r’ just before Connie realized it. John signalled Connie and informed Roy of the correction. (Ramsey/Padden 1998, 18)
During text comprehension and production activities, teachers and children move between the languages. For example, teachers provide scaffolding through sign language during reading activities, including explanations about points of contrast between spoken language and sign language. Bagga-Gupta (2004) describes chaining of the two languages in a simultaneous or synchronised way, for example, by periodically switching between the two languages or ‘visually reading’ (signing) a text.

Mugnier (2006) analyses the dynamics of bilingual communication in LSF and French during text comprehension activities in classes taught by a hearing or a deaf teacher. LSF and French were used by the children in both situations. However, while the deaf teacher validated the children’s responses in either language, the hearing teacher only confirmed the correctness of spoken responses. Teacher-student exchanges including metalinguistic reflection about the differences between the languages only occurred in interaction with the deaf teacher. It was occasionally observed that in communication with the hearing teacher, the children engaged with each other in a parallel conversation, with no participation by the teacher. Millet and Mugnier (2004) conclude that children do not profit from their incipient bilingualism by the simple juxtaposed presence of the languages in the classroom, but benefit where language alternation is a component of the didactic approach.

The dynamics of bilingual communication in the classroom also has a cultural component. As pointed out by Ramsey and Padden (1998, 7) “a great deal of information about the cultural task of knowing both ASL and English and using each language in juxtaposition to the other is embedded in classroom discourse, in routine ‘teacher talk’ and in discussions”.
5.3. Sociolinguistic aspects of cross-modal language contact

Language choice in bilingual interactions is a complex phenomenon. For bilinguals, variation in situation induces different language modes. Whether bilinguals choose one language or another as a base language is related to a number of factors such as their fluency in the two languages, the conversational partners, the situation, the topic, and the function of the interaction (Fontana 1999; Grosjean 1982, 2008; Romaine 1996). For deaf bilinguals, limitations on perception and production of the spoken language condition its choice as a base language. Thus, in interactions with other sign bilinguals, code-switching for stylistic purposes or communicative efficiency can involve mouthing or fingerspelling. Code-switching provides an additional communication resource when clarification is required (Grosjean 1982). For this purpose, bilingual signers may combine elements of different codes, that is, sign language, mouthing, and fingerspelling (see chapter 35, Language Contact and Borrowing, for further discussion). An interesting example, indicating that cross-cultural differences are reflected in preferences for specific combinations over others, is described by Yang (2008) with respect to contact between Chinese Sign Language (CSL) and Chinese. Like other sign bilinguals, Chinese/CSL bilinguals combine CSL elements and mouthings. However, they also code-switch to written Chinese by tracing the strokes of Chinese characters in the air or on the palm of the hand, a strategy that is also common among hearing Chinese to disambiguate homophonic words.

Studies of the use of mixed varieties, initially referred to as ‘pidgin sign language’ and later as ‘contact signing’ (also see chapter 35), have shown that the hearing status of the conversational partner is not the sole criterion determining language choice in situations of sign language and spoken/written language contact (Fontana 1999). Lucas and Valli (1989) report on the use of ASL by some deaf participants in their communication with a hearing interviewer, and on the use of contact signing with a deaf interviewer, even where both the interviewers and the participants were fluent in ASL and the participants were addressed by the deaf interviewer in ASL. According to Lucas and Valli, language choice in any of these cases is determined by sociolinguistic factors,
such as the formality of an interview situation or the use of a particular language as a marker of identity. Similar patterns of language use have been reported with respect to other linguistic minorities (see Grosjean (1982) for a discussion of the factors that determine language choice in bilingual settings).

There has been limited research outside the US into the sociolinguistic factors determining language choice. However, studies of loan vocabulary indicate the influence of different teaching practices in deaf education. For example, the widespread use of fingerspelling as a teaching tool in the US is reflected in the frequency of specific types of cross-modal language contact phenomena, such as initialisation (Lucas/Valli 1992), which contrasts with the widespread use of mouthing in the sign languages of German-speaking countries, reflecting the spoken language orientation in deaf education in these countries. LeMaster’s (2003) study of differences between men’s and women’s signing indicates the impact of educational philosophy and segregation by gender. In Dublin, girls and boys attended different schools, which is reflected in lexical differences (see chapter 33, Sociolinguistic Aspects of Variation and Change, for details).

From a sociolinguistic perspective, where two linguistic communities do not live in regionally separate areas, language contact phenomena can be considered to be a natural outcome. This holds equally for the intricate relationship between the hearing and Deaf communities. However, where languages in a given social space do not have the same status, language mixing may not only be associated with a lack of competence at the individual level, but it may also be perceived as a cause of language loss (Baker 2001; Grosjean 1982; Romaine 1996).

The situation is markedly different in communities where bimodal bilingualism develops primarily as a result of a high incidence of deafness (Branson/Miller/Marsaja 1999; Woodward 2003; see chapter 24, Shared Sign Languages, for further discussion). Kisch (2008) describes the complexity of language profiles she encountered in her anthropological study of the Al-Sayyid community (Israel). Her observations of the interactions between hearing and deaf, competent and less competent signers point to a situation of intensive language contact, with dynamic movement between the languages. With respect to linguistic profiles, Kisch observes an asymmetry between deaf individuals, who are usually monolingual in sign language, and hearing individuals, who are bilingual in spoken language and sign language. This type of language contact situation results in a ‘reverse’ pattern to that found in the language profiles of linguistic minority members in other social contexts, where the minority language community members are bilingual, while the majority members are usually monolingual. This demonstrates how sociolinguistic and educational factors affect the patterns of language use in a given social space. Indeed, cases of such ‘village sign languages’ offer the opportunity to study the outcomes of language contact in situations without language planning targeting sign language. Also, in such communities, language behaviour is neither determined by the stigmatisation of deafness nor by the concept of a Deaf community and related notions of attitudinal deafness.
As more and more deaf children in such communities are exposed to deaf education, it will be interesting to see how this affects their language development and use, and, eventually, also the communication patterns in the social context of the village, particularly in light of the discussion of how the establishment of the American School for the Deaf was an important factor in the disappearance of the Martha’s Vineyard deaf community (Lane/Pillard/French 2000).
6. Conclusion

Human beings, deaf or hearing, have an innate predisposition to acquire one or more languages. Variation in linguistic profiles of deaf individuals, ranging from competence in two or more languages to rudimentary skills in only one language, indicates how innate and environmental factors conspire in the development and maintenance of a specific type of bilingualism that is characterised by the fragile pattern of transmission of sign languages and the unequal status of sign language and spoken/written language in terms of their accessibility.

Today, the diversity of approaches to communication in the education of deaf students ranges from a strictly monolingual (oralist) to a (sign) bilingual model of deaf education. Variation in the choice of the languages of instruction and educational placement reveals that diverse, and often conflicting, objectives need to be reconciled with the aim of guaranteeing equity of access and educational excellence to a heterogeneous group of learners, with marked differences in terms of their degree of hearing loss, prior educational experiences, linguistic profiles, and additional learning needs. Demographic changes relating to migration and the increasing number of children with cochlear implants add two new dimensions to the heterogeneity of the student population that need to be addressed in the educational domain.

While bilingualism continues to be regarded as a problem by advocates of a monolingual (oral only) education of deaf students, studies into the bimodal bilingual development of deaf learners have shown that sign language does not negatively affect spoken/written language development. Statistical studies documenting links between skills in the two languages and psycholinguistic studies showing that learners temporarily fill gaps in their weaker language by borrowing from their more advanced language further indicate that deaf learners, like their hearing bilingual peers, creatively pool their linguistic resources. Later in their lives, bilingual deaf individuals have been found to benefit from their bilingualism as they constantly move between the deaf and the hearing worlds, code-switching between the languages for stylistic purposes or communicative efficiency.
7. Literature

Aarons, Debra/Reynolds, Louise 2003 South African Sign Language: Changing Policy and Practice. In: Monaghan, Leila/Schmaling, Constanze/Nakamura, Karen/Turner, Graham (eds.), Many Ways to Be Deaf: International Variation in Deaf Communities. Washington, DC: Gallaudet University Press, 194⫺210.
Andrews, Jean F./Covell, John A. 2006 Preparing Future Teachers and Doctoral Level Leaders in Deaf Education: Meeting the Challenge. In: American Annals of the Deaf 151(5), 464⫺475.
Ann, Jean 2001 Bilingualism and Language Contact. In: Lucas, Ceil (ed.), The Sociolinguistics of Sign Languages. Cambridge: Cambridge University Press, 33⫺60.
Anthony, David 1971 Seeing Essential English. Anaheim, CA: Anaheim Union High School District.
Ardito, Barbara/Caselli, M. Cristina/Vecchietti, Angela/Volterra, Virginia 2008 Deaf and Hearing Children: Reading Together in Preschool. In: Plaza-Pust, Carolina/Morales-López, Esperanza (eds.), Sign Bilingualism: Language Development, Interaction, and Maintenance in Sign Language Contact Situations. Amsterdam: Benjamins, 137⫺164.
Bagga-Gupta, Sangeeta 2004 Literacies and Deaf Education: A Theoretical Analysis of the International and Swedish Literature. In: Forskning I Fokus 23. The Swedish National Agency for School Improvement.
Bagga-Gupta, Sangeeta/Domfors, Lars-Ake 2003 Pedagogical Issues in Swedish Deaf Education. In: Monaghan, Leila/Schmaling, Constanze/Nakamura, Karen/Turner, Graham (eds.), Many Ways to Be Deaf: International Variation in Deaf Communities. Washington, DC: Gallaudet University Press, 67⫺88.
Baker, Anne/Bogaerde, Beppie van den 2008 Code-mixing in Signs and Words in Input to and Output from Children. In: Plaza-Pust, Carolina/Morales-López, Esperanza (eds.), Sign Bilingualism: Language Development, Interaction, and Maintenance in Sign Language Contact Situations. Amsterdam: Benjamins, 1⫺27.
Baker, Colin 2001 Foundations of Bilingual Education and Bilingualism. Clevedon: Multilingual Matters.
Baker, Colin 2007 Becoming Bilingual through Bilingual Education. In: Auer, Peter/Wei, Li (eds.), Handbook of Multilingualism and Multilingual Communication. Berlin: Mouton de Gruyter, 131⫺152.
Bavelier, Daphne/Newport, Elissa L./Supalla, Ted 2003 Signed or Spoken, Children Need Natural Languages. In: Cerebrum 5, 15⫺32.
Berent, Gerald P. 2004 Sign Language ⫺ Spoken Language Bilingualism: Code Mixing and Mode Mixing by ASL-English Bilinguals. In: Bhatia, Tej K./Ritchie, William C. (eds.), The Handbook of Bilingualism. Oxford: Blackwell, 312⫺335.
Berenz, Norine 2003 Surdos Venceremos: The Rise of the Brazilian Deaf Community. In: Monaghan, Leila/Schmaling, Constanze/Nakamura, Karen/Turner, Graham (eds.), Many Ways to Be Deaf: International Variation in Deaf Communities. Washington, DC: Gallaudet University Press, 173⫺192.
Bogaerde, Beppie van den/Baker, Anne E. 2002 Are Young Deaf Children Bilingual? In: Morgan, Gary/Woll, Bencie (eds.), Directions in Sign Language Acquisition. Amsterdam: Benjamins, 183⫺206.
Branson, Jan/Miller, Don/Marsaja, I Gede 1999 Sign Language as a Natural Part of the Mosaic: The Impact of Deaf People on Discourse Forms in North Bali, Indonesia. In: Winston, Elizabeth (ed.), Storytelling and Conversation: Discourse in Deaf Communities. Washington, DC: Gallaudet University Press, 109⫺148.
Chamberlain, Charlene/Mayberry, Rachel I. 2000 Theorizing About the Relation Between American Sign Language and Reading. In: Chamberlain, Charlene/Morford, Jill P./Mayberry, Rachel I. (eds.), Language Acquisition by Eye. Mahwah, NJ: Lawrence Erlbaum, 221⫺260.
Cokely, Dennis 2005 Shifting Positionality: A Critical Examination of the Turning Point in the Relationship of Interpreters and the Deaf Community. In: Marschark, Marc/Peterson, Rico/Winston, Elizabeth (eds.), Sign Language Interpreting and Interpreter Education: Directions for Research and Practice. Oxford: Oxford University Press, 3⫺28.
Courcy, Michèle de 2005 Policy Challenges for Bilingual and Immersion Education in Australia: Literacy and Language Choices for Users of Aboriginal Languages, Auslan and Italian. In: The International Journal of Bilingual Education and Bilingualism 8(2⫺3), 178⫺187.
Cummins, Jim 1991 Interdependence of First- and Second-Language Proficiency in Bilingual Children. In: Bialystok, Ellen (ed.), Language Processing in Bilingual Children. Cambridge: Cambridge University Press, 70⫺89.
Dubuisson, Colette/Parisot, Anne-Marie/Vercaingne-Ménard, Astrid 2008 Bilingualism and Deafness: Correlations Between Deaf Students’ Ability to Use Space in Quebec Sign Language and their Reading Comprehension in French. In: Plaza-Pust, Carolina/Morales-López, Esperanza (eds.), Sign Bilingualism: Language Development, Interaction, and Maintenance in Sign Language Contact Situations. Amsterdam: Benjamins, 51⫺71.
Emmorey, Karen 2002 Language, Cognition, and the Brain. Mahwah, NJ: Lawrence Erlbaum.
Erting, Carol J./Kuntze, Marlon 2008 Language Socialization in the Deaf Communities. In: Duff, Patricia A./Hornberger, Nancy H. (eds.), Encyclopedia of Language and Education (2nd Edition), Volume 8: Language Socialization. Berlin: Springer, 287⫺300.
Fischer, Susan D. 1998 Critical Periods for Language Acquisition: Consequences for Deaf Education. In: Weisel, Amatzia (ed.), Issues Unresolved: New Perspectives on Language and Deaf Education. Washington, DC: Gallaudet University Press, 9⫺26.
Fontana, Sabina 1999 Italian Sign Language and Spoken Italian in Contact: An Analysis of Interactions Between Deaf Parents and Hearing Children. In: Winston, Elizabeth (ed.), Storytelling and Conversation: Discourse in Deaf Communities. Washington, DC: Gallaudet University Press, 149⫺161.
Gascón-Ricao, Antonio/Storch de Gracia y Asensio, José Gabriel 2004 Historia de la Educación de los Sordos en España y su Influencia en Europa y América. Madrid: Editorial Universitaria Ramón Areces.
Genesee, Fred 2002 Portrait of the Bilingual Child. In: Cook, Vivian (ed.), Portraits of the L2 User. Clevedon: Multilingual Matters, 167⫺196.
Gras, Victòria 2008 Can Signed Language Be Planned? Implications for Interpretation in Spain. In: Plaza-Pust, Carolina/Morales-López, Esperanza (eds.), Sign Bilingualism: Language Development, Interaction, and Maintenance in Sign Language Contact Situations. Amsterdam: Benjamins, 165⫺193.
Grosjean, François 1982 Life with Two Languages. Cambridge, MA: Harvard University Press.
Grosjean, François 2008 Studying Bilinguals. Oxford: Oxford University Press.
Günther, Klaus-B. 2003 Entwicklung des Wortschreibens bei Gehörlosen und Schwerhörigen Kindern. In: Forum 11, 35⫺70.
Günther, Klaus-B./Staab, Angela/Thiel-Holtz, Verena/Tollgreef, Susanne/Wudtke, Hubert (eds.) 1999 Bilingualer Unterricht mit Gehörlosen Grundschülern: Zwischenbericht zum Hamburger Bilingualen Schulversuch. Hamburg: Hörgeschädigte Kinder.
Günther, Klaus-B./Schäfke, Ilka/Koppitz, Katharina/Matthaei, Michaela 2004 Vergleichende Untersuchungen zur Entwicklung der Textproduktionskompetenz und Erzählkompetenz. In: Günther, Klaus-B./Schäfke, Ilka (eds.), Bilinguale Erziehung als Förderkonzept für Gehörlose SchülerInnen: Abschlussbericht zum Hamburger Bilingualen Schulversuch. Hamburg: Signum, 189⫺267.
Gustason, Gerrilee/Zawolkow, Esther (eds.) 1980 Using Signing Exact English in Total Communication. Los Alamitos, CA: Modern Signs Press.
Hoffmeister, Robert J. 2000 A Piece of the Puzzle: ASL and Reading Comprehension in Deaf Children. In: Chamberlain, Charlene/Morford, Jill P./Mayberry, Rachel I. (eds.), Language Acquisition by Eye. Mahwah, NJ: Lawrence Erlbaum, 143⫺163.
Humphries, Tom/MacDougall, Francine 2000 “Chaining” and Other Links Making Connections Between American Sign Language and English in Two Types of School. In: Visual Anthropology Review 15(2), 84⫺94.
Johnson, Robert E./Liddell, Scott K./Erting, Carol J. 1989 Unlocking the Curriculum: Principles for Achieving Access in Deaf Education. Washington, DC: Gallaudet University Press.
Johnston, Trevor 2003 Language Standardization and Signed Language Dictionaries. In: Sign Language Studies 3(4), 431⫺468.
Keating, Elizabeth/Mirus, Gene 2003 Examining Interactions Across Language Modalities: Deaf Children and Hearing Peers at School. In: Anthropology & Education Quarterly 34(2), 115⫺135.
Kisch, Shifra 2008 ‘Deaf Discourse’: The Social Construction of Deafness in a Bedouin Community. In: Medical Anthropology 27(3), 283⫺313.
Kiyaga, Nassozi B./Moores, Donald F. 2003 Deafness in Sub-Saharan Africa. In: American Annals of the Deaf 148(1), 18⫺24.
Knight, Pamela/Swanwick, Ruth 2002 Working with Deaf Pupils: Sign Bilingual Policy into Practice. London: David Fulton.
Knoors, Harry 2006 Educational Responses to Varying Objectives of Parents of Deaf Children: A Dutch Perspective. In: Journal of Deaf Studies and Deaf Education 12, 243⫺253.
Komesaroff, Linda 2001 Adopting Bilingual Education: An Australian School Community’s Journey. In: Journal of Deaf Studies and Deaf Education 6(4), 299⫺314.
Krausneker, Verena 2008 Language Use and Awareness of Deaf and Hearing Children in a Bilingual Setting. In: Plaza-Pust, Carolina/Morales-López, Esperanza (eds.), Sign Bilingualism: Language Development, Interaction, and Maintenance in Sign Language Contact Situations. Amsterdam: Benjamins, 195⫺221.
Kuntze, Marlon 1998 Codeswitching in ASL and Written English Language Contact. In: Emmorey, Karen/Lane, Harlan (eds.), The Signs of Language Revisited: An Anthology to Honor Ursula Bellugi and Edward Klima. Mahwah, NJ: Lawrence Erlbaum, 287⫺302.
Ladd, Paddy 2003 Understanding Deaf Culture: In Search of Deafhood. Clevedon: Multilingual Matters.
Lane, Harlan/Hoffmeister, Robert/Bahan, Ben 1996 A Journey Into the Deaf-World. San Diego, CA: DawnSignPress.
Lane, Harlan/Pillard, Richard/French, Mary 2000 Origins of the American Deaf-World: Assimilating and Differentiating Societies and Their Relation to Genetic Patterning. In: Sign Language Studies 1(1), 17⫺44.
Lang, Harry G. 2003 Perspectives on the History of Deaf Education. In: Marschark, Marc/Spencer, Patricia (eds.), Oxford Handbook of Deaf Studies, Language, and Education. Oxford: Oxford University Press, 9⫺20.
Lanza, Elizabeth 1997 Language Mixing in Infant Bilingualism: A Sociolinguistic Perspective. Oxford: Clarendon.
LaSasso, Carol/Lamar Crain, Kelly/Leybaert, Jacqueline 2010 Cued Speech and Cued Language for Deaf and Hard of Hearing Children. San Diego, CA: Plural Publishing.
LeMaster, Barbara 2003 School Language and Shifts in Irish Deaf Identity. In: Monaghan, Leila/Schmaling, Constanze/Nakamura, Karen/Turner, Graham (eds.), Many Ways to Be Deaf: International Variation in Deaf Communities. Washington, DC: Gallaudet University Press, 153⫺172.
Leuninger, Helen 2000 Mit den Augen Lernen: Gebärdenspracherwerb. In: Grimm, Hannelore (ed.), Enzyklopädie der Psychologie. Bd. IV: Sprachentwicklung. Göttingen: Hogrefe, 229⫺270.
Lucas, Ceil/Valli, Clayton 1989 Language Contact in the American Deaf Community. In: Lucas, Ceil (ed.), The Sociolinguistics of the Deaf Community. San Diego, CA: Academic Press, 11⫺40.
Lucas, Ceil/Valli, Clayton 1992 Language Contact in the American Deaf Community. New York, NY: Academic Press.
Mahshie, Shawn Neal 1997 A First Language: Whose Choice Is It? (A Sharing Ideas Series Paper, Gallaudet University, Laurent Clerc National Deaf Education Center). [Retrieved 19 February 2003 from: http://clerccenter.gallaudet.edu/Products/Sharing-Ideas/index.html]
Marschark, Marc/Sapere, Patricia/Convertino, Carol/Seewagen, Rosemarie 2005 Educational Interpreting: Access and Outcomes. In: Marschark, Marc/Peterson, Rico/Winston, Elizabeth (eds.), Sign Language Interpreting and Interpreter Education: Directions for Research and Practice. Oxford: Oxford University Press, 57⫺83.
Mayberry, Rachel 2007 When Timing Is Everything: Age of First-language Acquisition Effects on Second-language Learning. In: Applied Psycholinguistics 28, 537⫺549.
Mayer, Connie 2007 What Really Matters in the Early Literacy Development of Deaf Children. In: Journal of Deaf Studies and Deaf Education 12(4), 411⫺431.
Mayer, Connie/Akamatsu, Tane 2003 Bilingualism and Literacy. In: Marschark, Marc/Spencer, Patricia (eds.), Oxford Handbook of Deaf Studies, Language, and Education. Oxford: Oxford University Press, 136⫺147.
Meisel, Jürgen M. 2004 The Bilingual Child. In: Bhatia, Tej K./Ritchie, William C. (eds.), The Handbook of Bilingualism. Oxford: Blackwell, 91⫺113.
Millet, Agnès/Mugnier, Saskia 2004 Français et Langue des Signes Française (LSF): Quelles Interactions au Service des Compétences Langagières? Etude de Cas d’une Classe d’Enfants Sourds de CE2. In: Repères 29, 1⫺20.
Mohanty, Ajit K. 2006 Multilingualism of the Unequals and Predicaments of Education in India: Mother Tongue or Other Tongue? In: García, Ofelia/Skutnabb-Kangas, Tove/Torres-Guzmán, María (eds.), Imagining Multilingual Schools: Languages in Education and Globalization. Clevedon: Multilingual Matters, 262⫺283.
Monaghan, Leila 2003 A World’s Eye View: Deaf Cultures in Global Perspective. In: Monaghan, Leila/Schmaling, Constanze/Nakamura, Karen/Turner, Graham (eds.), Many Ways to Be Deaf: International Variation in Deaf Communities. Washington, DC: Gallaudet University Press, 1⫺24.
Monaghan, Leila/Schmaling, Constanze/Nakamura, Karen/Turner, Graham (eds.) 2003 Many Ways to Be Deaf: International Variation in Deaf Communities. Washington, DC: Gallaudet University Press.
Moores, Donald F. 2007 Educational Practices and Assessment. In: American Annals of the Deaf 151(5), 461⫺463.
Moores, Donald F./Martin, David S. 2006 Overview: Curriculum and Instruction in General Education and in Education of Deaf Learners. In: Moores, Donald F./Martin, David S. (eds.), Deaf Learners: Developments in Curriculum and Instruction. Washington, DC: Gallaudet University Press, 3⫺13.
Morales-López, Esperanza 2008 Sign Bilingualism in Spanish Deaf Education. In: Plaza-Pust, Carolina/Morales-López, Esperanza (eds.), Sign Bilingualism: Language Development, Interaction, and Maintenance in Sign Language Contact Situations. Amsterdam: Benjamins, 223⫺276.
Mugnier, Saskia 2006 Le Bilinguisme des Enfants Sourds: de Quelques Freins aux Possibles Moteurs. In: GLOTTOPOL Revue de Sociolinguistique en Ligne. [Retrieved 8 March 2006 from: http://www.univ-rouen.fr/dyalang/glottopol]
Musselman, Carol 2000 How Do Children Who Can’t Hear Learn to Read an Alphabetic Script? A Review of the Literature on Reading and Deafness. In: Journal of Deaf Studies and Deaf Education 5(1), 9⫺31.
Nakamura, Karen 2003 U-turns, Deaf Shock, and the Hard-of-hearing: Japanese Deaf Identities at the Borderlands. In: Monaghan, Leila/Schmaling, Constanze/Nakamura, Karen/Turner, Graham (eds.), Many Ways to Be Deaf: International Variation in Deaf Communities. Washington, DC: Gallaudet University Press, 211⫺229.
Odlin, Terence 2003 Cross-linguistic Influence. In: Doughty, Catherine J./Long, Michael H. (eds.), The Handbook of Second Language Acquisition. Oxford: Blackwell, 436⫺486.
Odom, Samuel L./Hanson, Marci J./Lieber, Joan/Marquart, Jules/Sandall, Susan/Wolery, Ruth/Horn, Eva/Schwartz, Ilene/Beckman, Paula/Hikido, Christine/Chambers, Jay 2001 The Costs of Pre-School Inclusion. In: Topics in Early Childhood Special Education 21, 46⫺55.
Padden, Carol 1998 From the Cultural to the Bicultural: The Modern Deaf Community. In: Parasnis, Ila (ed.), Cultural and Language Diversity: Reflections on the Deaf Experience. Cambridge: Cambridge University Press, 79⫺98.
Padden, Carol/Humphries, Tom 2005 Inside Deaf Culture. Cambridge, MA: Harvard University Press.
Padden, Carol/Ramsey, Claire 1998 Reading Ability in Signing Deaf Children. In: Topics in Language Disorders 18, 30⫺46.
Angelides, Panayiotis/Aravi, Christiana 2006 A Comparative Perspective of Deaf and Hard-of-hearing Individuals as Students at Mainstream and Special Schools. In: American Annals of the Deaf 151(5), 476⫺487.
Petitto, Laura Ann/Katerelos, Marina/Levy, Bronna G./Gauna, Kristine/Tetreault, Karina/Ferraro, Vittoria 2001 Bilingual Signed and Spoken Language Acquisition from Birth: Implications for the Mechanisms Underlying Early Bilingual Language Acquisition. In: Journal of Child Language 28, 453⫺496.
Plaza-Pust, Carolina 2004 The Path Toward Bilingualism: Problems and Perspectives with Regard to the Inclusion of Sign Language in Deaf Education. In: Van Herreweghe, Mieke/Vermeerbergen, Myriam (eds.), To the Lexicon and Beyond: Sociolinguistics in European Deaf Communities. Washington, DC: Gallaudet University Press, 141⫺170.
Plaza-Pust, Carolina 2008 Why Variation Matters: On Language Contact in the Development of L2 Written German. In: Plaza-Pust, Carolina/Morales-López, Esperanza (eds.), Sign Bilingualism: Language Development, Interaction, and Maintenance in Sign Language Contact Situations. Amsterdam: Benjamins, 73⫺135.
Plaza-Pust, Carolina/Morales-López, Esperanza (eds.) 2008 Sign Bilingualism: Language Development, Interaction, and Maintenance in Sign Language Contact Situations. Amsterdam: Benjamins.
Plaza-Pust, Carolina/Weinmeister, Knut 2008 Bilingual Acquisition of German Sign Language and Written Language: Developmental Asynchronies and Language Contact. In: Quadros, Ronice M. de (ed.), Sign Languages: Spinning and Unraveling the Past, Present, and Future. Forty-five Papers and Three Posters from the 9th Theoretical Issues in Sign Language Research Conference, Florianopolis, Brazil, December 2006. Petrópolis (Brazil): Editora Arara Azul, 497⫺529. [Available from: www.editora-arara-azul.com.br/EstudosSurdos.php]
Preisler, Gunilla 2007 The Psychosocial Development of Deaf Children with Cochlear Implants. In: Komesaroff, Linda (ed.), Surgical Consent: Bioethics and Cochlear Implantation. Washington, DC: Gallaudet University Press, 120⫺136.
Ramsey, Claire/Padden, Carol 1998 Natives and Newcomers: Gaining Access to Literacy in a Classroom for Deaf Children. In: Anthropology & Education Quarterly 29(1), 5⫺24.
Ravid, Dorit/Tolchinsky, Liliana 2002 Developing Linguistic Literacy: A Comprehensive Model. In: Journal of Child Language 29, 417⫺447.
Reagan, Timothy 2001 Language Planning and Policy. In: Lucas, Ceil (ed.), The Sociolinguistics of Sign Languages. Cambridge: Cambridge University Press, 145⫺180.
Romaine, Suzanne 1996 Bilingualism. In: Ritchie, William C./Bhatia, Tej K. (eds.), Handbook of Second Language Acquisition. San Diego, CA: Academic Press, 571⫺601.
Saiegh-Haddad, Elinor 2005 Correlates of Reading Fluency in Arabic: Diglossic and Orthographic Factors. In: Reading and Writing 18(6), 559⫺582.
Senghas, Richard J. 2003 New Ways to Be Deaf in Nicaragua: Changes in Language, Personhood, and Community. In: Monaghan, Leila/Schmaling, Constanze/Nakamura, Karen/Turner, Graham (eds.), Many Ways to Be Deaf: International Variation in Deaf Communities. Washington, DC: Gallaudet University Press, 260⫺282.
Singleton, Jenny L./Supalla, Samuel J. 2003 Assessing Children’s Proficiency in Natural Signed Languages. In: Marschark, Marc/Spencer, Patricia (eds.), Oxford Handbook of Deaf Studies, Language, and Education. Oxford: Oxford University Press, 289⫺302.
Svartholm, Kristina 2007 Cochlear Implanted Children in Sweden’s Bilingual Schools. In: Komesaroff, Linda (ed.), Surgical Consent: Bioethics and Cochlear Implantation. Washington, DC: Gallaudet University Press, 137⫺150.
Swanwick, Ruth/Gregory, Susan 2007 Sign Bilingual Education: Policy and Practice. Coleford: Douglas McLean.
Szagun, Gisela 2001 Language Acquisition in Young German-speaking Children with Cochlear Implants: Individual Differences and Implications for Conceptions of a ‘Sensitive Phase’. In: Audiology and Neurotology 6, 288⫺297.
Tellings, Agnes 1995 The Two Hundred Years’ War in Deaf Education: A Reconstruction of the Methods Controversy. PhD Dissertation, University of Nijmegen.
Tracy, Rosemarie/Gawlitzek-Maiwald, Ira 2000 Bilingualismus in der frühen Kindheit. In: Grimm, Hannelore (ed.), Enzyklopädie der Psychologie. Bd. IV: Sprachentwicklung. Göttingen: Hogrefe, 495⫺514.
Vercaingne-Ménard, Astrid/Parisot, Anne-Marie/Dubuisson, Colette 2005 L’approche Bilingue à l’École Gadbois. Six Années d’Expérimentation. Bilan et Recommandations. Rapport Déposé au Ministère de l’Éducation du Québec. Université du Québec à Montréal.
Wilbur, Ronnie B. 2000 The Use of ASL to Support the Development of English and Literacy. In: Journal of Deaf Studies and Deaf Education 5(1), 81⫺103.
Woll, Bencie 2003 Modality, Universality and the Similarities Among Sign Languages: A Historical Perspective. In: Baker, Anne/Bogaerde, Beppie van den/Crasborn, Onno (eds.), Cross-linguistic Perspectives in Sign Language Research. Selected Papers from TISLR 2000. Hamburg: Signum, 17⫺27.
Woll, Bencie/Ladd, Paddy 2003 Deaf Communities. In: Marschark, Marc/Spencer, Patricia E. (eds.), Oxford Handbook of Deaf Studies, Language, and Education. Oxford: Oxford University Press, 151⫺163.
Woodward, James 2003 Sign Languages and Deaf Identities in Thailand and Viet Nam. In: Monaghan, Leila/Schmaling, Constanze/Nakamura, Karen/Turner, Graham (eds.), Many Ways to Be Deaf: International Variation in Deaf Communities. Washington, DC: Gallaudet University Press, 283⫺301.
Yang, Jun Hui 2008 Sign Language and Oral/Written Language in Deaf Education in China. In: Plaza-Pust, Carolina/Morales-López, Esperanza (eds.), Sign Bilingualism: Language Development, Interaction, and Maintenance in Sign Language Contact Situations. Amsterdam: Benjamins, 297⫺331.
Carolina Plaza-Pust, Frankfurt am Main (Germany)
40. Interpreting

1. Introduction
2. Signing communities and language brokering
3. A history of sign language interpreting
4. Research into sign language interpreting
5. International Sign interpreting
6. Conclusions
7. Literature
Abstract

This chapter explores the emerging evidence of the history of interpreting and sign language interpreting across the world. Topics to be addressed include signing communities known to have existed in the last 400 years and the roles adopted by bilingual members of those communities. The emergence of the profession of sign language interpreters (Deaf and non-Deaf, deaf and hearing) around the world will be discussed, with a more detailed analysis of the evolution of the profession within the UK. The chapter then addresses interpreter bilingualism and the growth of evidence-based research into sign language interpreting. The chapter concludes with a discussion of interpreting into International Sign.
1. Introduction

This chapter gives an overview of the history and evolution of the sign language interpreting profession in parallel with the evolution of sign languages within Deaf communities. Trends in interpreting research will be reviewed and ‘International Sign’ interpreting will be discussed.

Often when two communities are in contact, community members make an effort to learn the other community’s language, both for direct communication and to help other parties communicate. The relationship between the communities, their economic value, and their status influence these interactions. This includes interpreter-mediated interaction, when a bilingual individual facilitates communication between two parties. Historically, interpreters and translators have been used to facilitate communication and trade between groups who do not speak or write the same language. They have also been used to oppress, manipulate, and control minority cultures and languages. The oldest recorded use of an interpreter (although often called a translator) was in 2500 BC in ancient Egypt under King Neferirka-Re. Here the interpreters were used in trade and to ensure that the ‘barbarians’, that is, those who did not speak Egyptian, obeyed the king (Hermann 1956). Similar power dynamics can be seen within societies today for both spoken language (Bassnett/Trivedi 1999) and sign language (Ladd 2003) interpreting; those who speak (or sign) a world language, such as English or American Sign Language (ASL), or the dominant language of a country, can and do exercise power through interpreters, whether consciously or not.

Throughout history and across the world, where sufficient numbers of Deaf people form a community, sign languages have come into existence. Hearing and deaf members of those societies who are able to sign have been called upon to act as translators and interpreters for Deaf people in order to allow them to interact with the non-signing mainstream and those who come from communities using a different sign language (for an example, see Carty/Macready/Sayers (2009, 308⫺313)). Whilst the records for sign language interpreters do not extend as far back as those for spoken languages, there is documentary evidence recording the development of the sign language interpreting profession and the involvement of culturally Deaf people (i.e. members of Deaf communities, whether deaf or hearing, who use sign language) and non-Deaf people (i.e. those people who are not members of Deaf communities, whether deaf or hearing), thus providing access to information and civil society for Deaf communities.
2. Signing communities and language brokering

There is evidence that in some communities, both past and present, with a relatively high incidence of audiological deafness (from genetic or other causes), many of the non-Deaf population know some signs, even if they do not use a full sign language. Examples of such communities include Martha’s Vineyard in the US from the 17th to the early 20th centuries (Groce 1985) and, in the 21st century, Desa Kolok in Bali (Marsaja 2008), Adamorobe in Ghana (Nyst 2007), Mardin in Turkey (Dikyuva 2008), and a Yucatec Maya village in Mexico (Johnson 1991; Fox Tree 2009) (for an overview, see Ragir (2002); also see chapter 24, Shared Sign Languages, for further discussion).

The Ottoman Court (1500⫺1700) provides an example of an institutionalized context for the creation of an (uncommonly) high status signing community. According to Miles (2000), a significant number of mutes (sic) were brought together at the court and used signs and ‘head actions’ (presumably a manifestation of prosody, see Sandler (1999)). This community differed from other signing communities in that it had institutional status, created by the ruling Sultan, and hence had associated high status, to the extent that deaf-mutes were sought throughout the Ottoman Empire to join the court (Miles 2004). In this context, those fluent in the sign language of the court were engaged as ‘translators’. These often functioned in high status contexts, for example with “[t]he Dutch ambassador Cornelis Haga, who reached Constantinople around 1612, [who] went so far as to invite the court mutes to a banquet and, with a sign translator’s help, was impressed by their eloquence on many topics” (Deusingen 1660; transl. Sibscota 1670, 42f; cited in Miles 2000, 123). Those deaf-mutes had some form of access to education and had specific roles to fulfil within the court.

Latterly, deaf education has been a similar institutional driving force for sign language transmission (see chapter 39). For the purpose of education, deaf children are brought together, often in residential schools. These institutions form the beginnings of large Deaf communities, regional and national sign languages (Quinn 2010), as well as language brokering, translation, and interpreting provided within these institutions by their members (Adam/Carty/Stone 2011; Stone/Woll 2008).
The non-institutionalized examples of sign languages have been called, in more recent years, ‘rural’ or ‘village’ sign languages, with their counterparts of institutional origin being referred to as ‘urban’ sign languages (Jepson 1991). Village sign language communities have rarely had their languages documented (for some of those that have, see the references provided above); also, the interaction between deaf people and hearing people who do not sign is not well documented (cf. Miles 2004), nor is the extent to which interpreting occurs within the community, and between the community and outsiders. Village sign language contexts would provide an interesting insight into the power dynamics and language brokering (Morales/Hanson 2005) that occur in situations where a spoken language and a sign language have more or less equal status (within the community) and differing status (outside of the community). Groce (1985) describes her interviewees from Martha’s Vineyard as being unable to remember whether certain members of the community were deaf or hearing, and yet, if conversation was taking place in spoken English and a deaf person arrived, someone would interpret (see also Woll/Ladd (2010, 166 f.) for a useful description of several village sign communities). There clearly was an expectation for hearing members of the society to interpret, although it is not clear whether a small set of individuals were relied upon to interpret, and whether community members expressed any preference for specific individuals to interpret. Often preferences expressed by community members make manifest the community’s expectation of interpreter and translation behaviour, which can be at odds with mainstream expectations (see Stone (2009) for more extensive treatment of a Deaf translation norm).

Urban sign languages tend to originate in institutions, often educational or religious, where Deaf people are brought together, many of whom will enter such institutions knowing only a home sign system (Goldin-Meadow 2003; see chapter 26 for discussion). More rarely, those with parents who are sign language users may already have acquired a sign language. Within these contexts, the workers in the institutions (mostly clerics and teachers) have some degree of fluency in the sign language of that community. Language brokering often occurs between the workers and the deaf people. Frequently, interpreting is undertaken initially by members of the Deaf community who can hear (Corfmat 1990) and then by members of the mainstream community who learned to sign (Simpson 2007). This route to professionalization (that is, the training of language brokers and then of naïve bilinguals) is a common one for many languages, both spoken and signed (Mikkelson 1996). Initially, bilingual friends and family interpret, then people within wider community networks, including qualified professionals (religious workers, teachers, welfare professionals). Subsequently, training is formally established and the role of a professional interpreter is separated from other roles (Simpson 2007).
3. A history of sign language interpreting

As mentioned above, the earliest record of a ‘sign translator’ is from 1612, in an account written in 1660, and describes an interpreter working at an international political level (for a Dutch visitor to the Ottoman court). It is not known who this translator was, but it is likely that he was a hearing member of the court who had become fluent in the sign language of the deaf-mutes brought to the Ottoman court (Miles 2000). The first record of a Deaf person undertaking institutional language brokering involves Matthew Pratt, husband to Sarah Pratt, both of whom were deaf and who used sign language as their principal means of communication (Carty/Macready/Sayers 2009). Sarah Pratt (1640⫺1729) underwent an interview to be accepted as a member of the Christian fellowship in the Puritan church of Weymouth, Massachusetts, in colonial New England. During this interview, both of Sarah’s hearing sisters interpreted for her, and Matthew Pratt wrote a transcript from sign language to written English. It thus appears that one of the earliest records documents Deaf and hearing interpreters/translators working alongside each other.

Samuel Pepys’ diary entry for 9 November 1666 describes his colleague, Sir George Downing, acting as an interpreter; Pepys asks Downing to tell a deaf boy,

that I was afeard that my coach would be gone, and that he should go down and steal one of the seats out of the coach and keep it, and that would make the coachman to stay. He did this, so that the dumb boy did go down, and, like a cunning rogue, went into the coach, pretending to sleep; and, by and by, fell to his work, but finds the seats nailed to the coach. So he did all he could, but could not do it; however, stayed there, and stayed the coach till the coachman’s patience was quite spent, and beat the dumb boy by force, and so went away. So the dumb boy come up and told him all the story, which they below did see all that passed, and knew it to be true. (Pepys 1666, entry for 9 November 1666)
Here the deaf servant’s master acts as an interpreter. This type of summarising interpreting is also reported in other records when non-signers wish to understand what a deaf signer is saying.

The oldest mention of sign language interpreter provision in court appears in London’s Old Bailey Criminal Court Records for 1771 (Hitchcock/Shoemaker 2008). The transcripts of the proceedings for this year mention that a person, whose name is not given, “with whom he [the defendant] had formerly lived as a servant was sworn interpreter”. The transcript goes on to state that this interpreter “explained to him the nature of his indictment by signs”. This is the first documented example of a person serving in the capacity of sign language interpreter in Britain, although there is little evidence to suggest that British Sign Language (BSL) was used rather than a home sign system (Stone/Woll 2008). Deaf schools were only just being established at that time (Lee 2004) and it is not known if there were communities of sign language users of which the defendant could have been a part.

The first mention of a Deaf person functioning as a court interpreter occurs not long after, in 1817, in Scotland. This Deaf assistant worked alongside the headmaster of the Edinburgh school for the Deaf, Mister Kinniburgh (Hay 2008). The defendant, Jean Campbell, an unschooled deaf woman in Glasgow, was charged with throwing her infant off a bridge. As Glasgow had no deaf school, Kinniburgh, principal of the Edinburgh school, was called to interpret. He communicated by “making a figure with his handkerchief across his left arm in imitation of a child lying there, and having afterwards made a sign to her as if throwing the child over the bar [...] she made a sign and the witness said for her ‘not guilty, my lord’” (Caledonian Mercury 1817). The unnamed Deaf person working as an interpreter assisted the communication by ensuring that the deaf woman understood the BSL used by Kinniburgh and that Kinniburgh understood her.

This role is still undertaken by Deaf people: deaf people isolated from the community, with limited schooling, or with late exposure to a sign language (some of whom are described as semi-lingual (Skutnabb-Kangas 1981)) may sign in a highly idiosyncratic way, using visually motivated gestures. In these instances, Deaf interpreters can facilitate communication (Bahan 1989, 2008; Boudreault 2005). Despite their service over many years, in many countries Deaf interpreters have not yet undergone professionalization and still work without training or qualification. This situation has begun to change in recent years as a result of better educational opportunities, with formal qualifications becoming available to Deaf interpreters, or parallel qualifications being developed.

Hearing interpreters underwent professionalization well before the professional recognition of their Deaf peers. Few professional interpreters are from the ‘core’ of the Deaf community, that is, deaf people from Deaf families. It is often regarded as not feasible for a Deaf person to work as a translator and/or interpreter (T/I). In most situations, a T/I is required to interpret from a spoken language into the national sign language and vice versa; as deaf people are not able to hear the spoken language, they are usually not identified by the mainstream as interpreters. This contrasts with most minority language T/Is, who come from those communities rather than being outsiders (Alexander/Edwards/Temple 2004). There are now, however, possibilities for Deaf interpreters and translators to be trained and accredited in Australia, Canada, France, South Africa, the UK, and the US, with an increasing role for Deaf interpreters at national, transnational (e.g. the European Forum for Sign Language Interpreters ⫺ EFSLI), and international (e.g. the World Association of Sign Language Interpreters ⫺ WASLI) conferences.
3.1. Ghost writers and Deaf language brokering

As well as limited recognition of Deaf people as interpreters, until recently there has been little exploration of the role of bilingual deaf people as interpreters and translators, both inside the community and between the community and the mainstream. As pointed out previously, the first recorded mention of a deaf translator appears in the mid-17th century and of a deaf interpreter in 1817 (Carty/Macready/Sayers 2009; Hay 2008). This suggests that bilingual Deaf people have been supporting Deaf people in understanding the world around them, in their interactions within the community, and with the wider world, for as long as deaf people have come together to form language communities. Just as interpreters working for governments need to render the accounts of refugees and asylum seekers into ‘authentic’ accounts for institutions (Inghilleri 2003), Deaf communities also desire ‘authentic’ accounts of the world and institutions that are easily understandable to them. Deaf people who undertake language brokering, translation, and interpreting are able to provide the community with these authentic accounts.
Since the inception of Deaf clubs, bilingual deaf people have supported the community by translating letters, newspapers, and other information for semi-literate and monolingual Deaf people. This is still found today (Stone 2009) and is considered by these translators as part of their responsibility to the community, an example of the reciprocal sharing of skills within the community’s collectivist culture (Smith 1996). Much of this language brokering is hidden and starts at an early age. Socialization as described by Bourdieu (1986) does not happen within the family for deaf children born to non-Deaf parents. Rather, socialization occurs with deaf and Deaf children at school and with Deaf adults inside and outside of school. It is this process, or Deafhood (Ladd 2003), that brings about the identity change from deaf to Deaf, and we often find a sharing of skills, including bilingual skills, within these communities, understood as being part of Deaf identity. There are accounts of translation and interpreting within residential schools, not only in the classroom, where students support each other (Boudreault 2005, 324), but also in situations where students help each other with correspondence to parents and family (Adam/Carty/Stone 2011). With the introduction of oral education practices, Deaf pupils with Deaf parents have even interpreted for other parents (Thomas 2008). These language brokers are called ‘ghost writers’ in the Australian Deaf community (Adam/Carty/Stone 2011). This type of activity has also been reported in the US (Bienvenu 1991, cited in Forestal 2005), in the UK (Stone/Adam 2008), and in Ireland and Argentina (Adam/Dunne/Druetta 2008).
Of note is the irrelevance of the socioeconomic status of the bilingual Deaf person within the Deaf community. Being a language professional is often considered a high status job in the mainstream (at least when working with a high status ‘developed world’ language). This, however, is not the case in the Deaf community, where language skills are freely shared along with other skills. This reciprocity is typical within urban sign language communities where, for example, a Deaf builder with English literacy will freely give his building and literacy skills to the community as part of community skills sharing (Stone/Adam 2008).
3.2. Bilingualism and professional interpreting

One central issue in interpreting is the level and type of bilingualism necessary to perform as an interpreter; indeed, this is one of the reasons for the development of Deaf interpreters. Grosjean (1997) observes that bilinguals may have restricted domains of language use in one or both of their languages, since the environments in which the languages are used are often complementary. He also discusses the skill set of the bilingual individual before training as a translator or interpreter, and notes that few bilinguals are entirely bicultural. These factors affect the bilingual person’s language, resulting, for instance, in gaps in vocabulary and/or restricted access to stylistic varieties in one or more of their languages. Interpreter training must address these gaps since interpreters, unlike most bilinguals, must use skills in both their languages for similar purposes, in similar domains of life, with similar people (Grosjean 1997). Interpreters have to reflect upon their language use and ensure that they have language skills in both languages sufficient for the areas within which they work. Additionally, because the Deaf community is a bilingual community and Deaf people have often had exposure to the language brokering of deaf bilinguals from an early age (Adam/Carty/Stone 2011), a Deaf translation norm (Stone 2009) may exist. The training of sign language interpreters not only needs to develop translation equivalents, but also needs to sensitize interpreters to a Deaf translation norm, should the Deaf community they will be working within have one.
Hearing people with Deaf parents, sometimes known as Children of Deaf Adults (CODAs), inhabit and are encultured within both the Deaf community and the wider community (Bishop/Hicks 2008). Hearing native signers, who may be said to be ‘Deaf (hearing)’ (Stone 2009), often act informally as T/Is for family and friends from an early age (Preston 1996; cf. first-generation immigrants or children of minority communities, Hall 2004). Their role and identity are different from those of Deaf (deaf) interpreters, who may undertake similar activities, but within institutions such as deaf schools that the Deaf (hearing) signers do not attend. Hearing native signers may therefore not have exposure to a Deaf translation norm that emerges within Deaf (deaf) spaces and places; exposure to this norm may form part of the community’s selection process when choosing a hearing member of the Deaf community as an interpreter (Stone 2009).
A common complaint from Deaf people is that many sign language interpreters are not fluent enough in sign language (Alawni 2006; Deysel/Kotze/Katshwa 2006; Allsop/Stone 2007). This may be true both of hearing people with Deaf parents (cf. van den Bogaerde/Baker 2008) and of those who came into contact with the community at a later age. Learners of sign languages often struggle with language fluency (Quinto-Pozos 2005) and acculturation (Cokely 2005), in contrast to many spoken language interpreters, who only interpret into their first language (Napier/Rohan/Slatyer 2005).
Grosjean (1997) discusses language characteristics of ‘interpreter bilinguals’ in spoken languages and the types of linguistic features seen when they are working as T/Is, such as: (i) loan translations, where the morphemes in the borrowed word are translated item by item (Crystal 1997); (ii) nonce borrowings, where a source language term is naturalised by adapting it to the morphological and phonological rules of the target language; and (iii) code-switching, where a word is produced in the source rather than the target language. Parallels can be seen in sign language interpreting, for example, where mouthing is used to carry meaning or where fingerspelling of a source word is used in place of the target sign (Napier 2002; Steiner 1998; for discussion of mouthing and fingerspelling, see chapter 35). With many sign language interpreters being late learners of the sign language, such features of interpreted language may occur frequently.
There is a great deal of current interest in cross-modal bilingualism in sign language and spoken language. Recent research includes explorations of interpreting between two modalities (Padden 2000) as well as of code-blending in spontaneous interaction (Emmorey et al. 2008) and when interpreting (Metzger/de Quadros 2011). Although the grammars of sign languages differ from those of spoken languages, it is possible to co-articulate spoken words and manual units of sign languages. This results in a contact form unique to cross-modal bilingualism. Non-native signers ⫺ both deaf and hearing ⫺ may not utilize a fully grammatical sign language, but instead insert signs into the syntactic structure of their spoken language. Such individuals may prefer interpreters to use a bimodal contact form of signing. This has been described in the literature as ‘transliteration’ (Siple 1998) or sign-supported language (e.g. Sign Supported English), and some countries offer examinations and certification in this contact language form, despite the lack of an agreed way of producing this bilingual blend (see Malcolm (2005) for a description of this form of language and its use when interpreting in Canada).
3.3. Interpreter training

The professionalization of sign language interpreting has followed a similar path in most countries in the Western world: initially, those acting as interpreters would have come from the community or would have been closely associated with Deaf people (Corfmat 1990). Early interpreters came from the ranks of educators or church workers (Scott-Gibson 1991), with a gradual professionalization of interpreting, especially in institutional contexts such as the criminal justice system. Training and qualifications were introduced, with qualification certificates awarded by a variety of bodies in different countries, including Deaf associations, local government, national government, specialist awarding bodies, and, latterly, interpreter associations. As an example, within the UK, and under the direction of the Deaf Welfare Examination Board (DWEB), the church “supervised in-service training and examined candidates in sign language interpreting as part of the Board’s Certificate and Diploma examinations for missioner/welfare officers to the deaf” (Simpson 1991, 217); the list of successful candidates functioned as a register of interpreters from 1928 onwards. Welfare officers worked for the church, and interpreting was one of their many duties. This training required the trainee missioner/welfare officers to spend much of their time in the company of, and interpreting for, deaf people. This socializing with deaf people occurred within Deaf Societies (church-based social service structures established prior to government regulated social services) and other Deaf spaces, with trainees learning the language by mixing with Deaf people and supporting communication and other needs of those attending their churches and social clubs.
From the 1960s onwards, in many countries there were moves to ensure state provision of social welfare, and during this time, specialist social workers for the deaf were often also trained in sign language and functioned as interpreters. At different points in this transitional period, Deaf people in Western countries lobbied their national administrations for interpreting to be recognised and paid for as a discrete profession, to ensure the autonomy of Deaf people and the independence of the interpreter within institutional contexts. Within private settings, it is still often family and friends who interpret, as funds are only supplied for statutory matters. There are differences in provision in some countries, for instance, Finland (Services and Assistance for the Disabled Act 380/87), where Deaf people are entitled to a specified number of hours per year and are free to use these hours as they choose.
With the professionalization of sign language interpreting, national (e.g. RID in the USA), transnational (e.g. EFSLI), and global interpreting associations have been established. The World Association of Sign Language Interpreters (WASLI) was established in 2005. In January 2006, WASLI signed a joint agreement with the World Federation of the Deaf (WFD) to ensure that interpreter associations and Deaf associations work together in areas of mutual interest. The agreement also gives primacy to Deaf associations and Deaf communities in the documentation, teaching, and development of policies and legislation for sign languages (WASLI 2006). WASLI’s conferences enable accounts of interpreting in different countries to emerge.
Takagi (2005) reports on the need in Japan for interpreters who are able to translate/interpret to and from spoken English because of its importance as a lingua franca in the global Deaf community. Sign language interpreter training may therefore need to change to ensure that applicants have knowledge of several spoken and signed languages, rather than just one spoken language and one sign language. The need for sign language interpreters to work to and from a third language is seen in New Zealand, where Māori Deaf people need interpreters who are fluent in Te Reo Māori, the language of the Māori community (Napier/Locker McKee/Goswell 2006), as well as in New Zealand Sign Language. In Finland, interpreters need to pass qualifications in Finnish, Swedish, and English as well as Finnish Sign Language. In other regions, such as Africa, multilingualism is part of everyday life and interpreters are required to be multilingual.
Napier (2005) undertakes a thorough review of current programmes for training and accreditation in the UK, US, and Australia. There are many similarities among these three countries, with all having standards for language competence and interpreting competence to drive the formal assessment of interpreters seeking to gain full professional status. Other countries within Europe have similar structures (Stone 2008), including Estonia, Finland, and Sweden, where all interpreter training occurs in tertiary education settings. In contrast, in some countries, interpreters may only receive a few days or weeks of training, or undertake training outside their home country (Alawni 2006). Although most programmes start with people who are already fluent in the local or national sign language, as interpreters become more professionalized, training often moves into training institutions, where language instruction and interpreter training form part of the same programme (Napier 2005). In many countries, two levels ⫺ associate and full professional status ⫺ are available to members of the interpreting profession. These levels of qualification are differentiated in professional associations and registering bodies. Moving to full professional status often requires work experience and passing a qualifying assessment as well as initial training. With the inclusion of sign language in five of the articles of the UN Convention on the Rights of Persons with Disabilities (CRPD) (Article 9.2(e) explicitly states the need for professional sign language interpreter provision), there is every expectation that sign language interpreting will be further professionalized (see Stone (in press) for an analysis of the impact of the CRPD on service provision in the UK).
4. Research into sign language interpreting

Research into sign language interpreting began in the 1980s. Applied studies have included research on interpreting in different settings, such as conference, television, community, and educational interpreting, and surveys of training and provision; other studies have explored underlying psycholinguistic and sociolinguistic issues. In recent years, improved video technology has allowed for more fine-grained analyses of interpreting, of the decisions made by the interpreter when rendering one language into another, and of the target language as a linguistic product.
Early resources on the practice of sign language interpreting (Solow 1981) and on different models of practice (McIntire 1986) were principally based on interpreters reflecting on their own practice. Surveys explored training and the number of interpreters working in different countries across Europe (Woll 1988). These were followed in the 1990s by histories of the development of professional interpreters (Moorhead 1991; Scott-Gibson 1991). A number of textbook resources are available, which include overviews of the interpreting profession in individual countries (such as Ozolins/Bridge (1999), Napier/Locker McKee/Goswell (2006), and Napier (2009) for Australia and New Zealand). Most recently, studies of the quality of sign language interpreters on television in China have been published in English (Xiao/Yu 2009).
One of the first empirical studies of the underlying psychological mechanisms in interpreting and the sociolinguistics of language choice was Llewellyn-Jones (1981). This study examined the work of practising interpreters, posing two research questions: What is the best form of training for interpreters? How can we realistically assess interpreting skills? The study also discusses the processing models of interpreting current at the time, for both sign language and spoken language, and explores the effectiveness of information transfer, the time lag between source language output and the start of target language production, and the appropriateness of the choice of target language variety. Many of the themes addressed by Llewellyn-Jones (1981) are still being explored; the process of interpreting is not well understood, and it is only in recent years that modern psycholinguistic experimental techniques have been applied to interpreting.
The 1990s saw much more empirical research into interpreting (between both spoken languages and sign languages). These empirical studies provide us with further insight into the process of interpreting (Cokely 1992a) and the interpreting product (Cokely 1992b), using psychological and psycholinguistic methodologies to understand interpreting (spoken and signed) (Green et al. 1990; Moser-Mercer/Lambert 1994). This has led in recent years to an examination of the underpinning cognitive and linguistic skills needed for interpreter training and interpreting (López Gómez et al. 2007). Yet there is still no clear understanding of how interpreters work; many of the models developed have not been tested empirically. It is expected that modern techniques, both behavioural and neuroscientific (Price/Green/von Studnitz 1999), will in time provide further understanding of the underlying networks involved in interpreting and the time course of language processing for interpreting.
Much research has been undertaken from a sociolinguistic perspective, analysing interpreting not only in terms of target language choice (as in Llewellyn-Jones 1981), but also in terms of the triadic nature of interpreter-mediated communication, which influences both spoken language (Wadensjö 1998) and sign language interpreting (Roy 1999). The recognition of the effect of an interpreter’s presence has enabled the examination of interpreting as a discourse process, rather than categorising interpreters as invisible agents within interpreter-mediated interaction. This approach has led to greater exploration of the mediation role of the interpreter as a bilingual-bicultural unratified conversational partner. Metzger (1999) has described the contributions interpreters make within interactions and the agency of interpreters in relation to different participants within interpreter-mediated events. The series of Critical Link conferences for interpreters working in the community has also provided a forum for interpreters (of both sign languages and spoken languages) to share insights and research methodologies.
Most recently, Dickinson and Turner (2008) have explored interpreting for Deaf people within the workplace, providing useful insights into the relationships between interpreters and deaf people and into how interpreters position themselves within interpreter-mediated activity.
Sign language interpreting research has also started to look at interpreting within specific domains. In the legal domain, there is research on access for Deaf people in general. The large-scale study by Brennan and Brown (1997), which included court observations and interviews with interpreters, explored Deaf people’s access to justice via interpreters and the types of interpreters who undertake work in courts in the UK. Because of restrictions on recording courtroom proceedings, Russell’s study (2002) in Canada examined the mode of interpreting (consecutive vs. simultaneous) in relation to accuracy in a moot court, thus enabling a fine-grained analysis of the interpreted language. The use of court personnel and the design of the study enabled nearly ‘real’ courtroom interpreting. Russell found that the consecutive mode allowed interpreters to achieve a greater level of accuracy and provide more appropriate target language syntax. The extensive use of interpreters within educational settings has also led to studies of the active role of the recipient of interpreting (Marschark et al. 2004).
With regard to accuracy of translation and linguistic decision-making processes, Napier (2002) examined omissions made by interpreters in Australian Sign Language (Auslan) as a target language vis-à-vis the source language (English) when interpreting in tertiary educational settings. Napier explores naturalistic language use and the strategic omissions used by interpreters to manage the interpretation process and the information content. This study extends earlier work on the influence of interpreters within the community to conference-type interpreting. She also addresses issues of cognitive overload when omissions are made unconsciously rather than strategically.
Other recent examinations of interpreters’ target language include studies of prosody (Nicodemus 2009; Stone 2009). Nicodemus examines the use of boundary markers by hearing interpreters at points where Deaf informants agreed boundaries occur, demonstrating that interpreters are systematic in their use of boundary markers. Stone compares the marking of prosody by deaf and hearing interpreters, finding that although hearing interpreters mark clausal level units, deaf interpreters are able to generate nested clusters where both clausal and discourse units are marked and interrelated.
Further research looking at the development of fluency in trainee interpreters, transnational corpora of sign language interpretation, and the interpreting process itself will provide greater insights into the differences between interpreted and naturalistic language. Technological developments, including improved video frame rates, high definition recording, technologies for time-based video annotation, and motion analysis, should provide improved approaches to coding and analysing interpreting data. With the miniaturization of technology, such techniques may be used in ‘real’ interpreting situations as well as in lab-based interpreting data collection.
5. International Sign interpreting

Where Deaf people from different countries meet, spontaneously developed contact forms of signing have traditionally been used for cross-linguistic interaction. This form of communication, often called International Sign (IS), draws on signers’ access to iconicity and to common syntactic features in sign languages that make use of visual-spatial representations (Allsop/Woll/Brauti 1995). With a large number of Deaf people in many countries having ASL as a first or second sign language, it is increasingly common for ASL to serve as a lingua franca in such settings. However, ASL uses fingerspelling more extensively than many other sign languages, a fact which reduces ASL’s appeal to many Deaf people with limited knowledge of English or the Roman alphabet. It is possible that an ‘international’ ASL may evolve and that this will be used as a lingua franca at international events (see also chapter 35, Language Contact and Borrowing).
In the absence of a genuine international language, the informal use of IS has been extended to formal organisational contexts (WFD, Deaflympics, etc.). In these contexts, a formally or informally agreed-upon international sign lexicon for terminology relating to meetings (e.g. ‘regional secretariat’, ‘ordinary member’) is used. This lexicon developed from the WFD’s initial attempts in the 1970s to create a sign ‘Esperanto’ lexicon, called Gestuno, by selecting “naturally spontaneous and easy signs in common use by deaf people of different countries” (BDA 1975, 2). Besides IS serving as a direct form of communication between users of different sign languages, IS interpretation (both into and from IS) is now also increasingly provided. IS is useful in providing limited access via interpretation where interpretation into and out of specific sign languages is not available.
There have been few studies of IS interpreting, and publications on this topic have followed the general trend of the sign language interpreting literature, with personal reflection and introspection leading the way (Moody 1994; Scott-Gibson/Ojala 1994), followed by later empirical studies. Locker McKee and Napier (2002) analysed video-recordings of interpretation from English into IS in terms of content. They identified difficulties in annotating IS interpretation since, as a situational pidgin, IS has no fixed lexicon. The authors then focus on typical linguistic structures used by interpreters and infer the strategies employed by the interpreters. As expected, the pace of signing is slower than for interpretation into a sign language. The authors also report that the size of the sign space is larger than for interpretation into a national sign language, although it is unclear how this comparison was made. Mouthings and mouth gestures are mentioned, with IS interpretations having fewer mouthings but making enhanced use of mouth gestures for adverbials and of other non-manual markers for emphasising clause and utterance boundaries. With an increasing number of sign language corpora of various types, in the future these comparisons can be made in a more detailed manner.
The IS interpreters use strategies common to all interpretation, such as maintaining a long lag-time to ensure maximum understanding and thus a maximally relevant interpretation. Use of contrasting locations in space facilitates differentiation and serves as a strategy alongside slower language production to enhance comprehension. Abstract concepts are made more concrete, with extensive use of hyponyms to allow the audience to retrieve the speaker’s intent. Similarly, role-shift and constructed action are used extensively to assist the audience in drawing upon experience to infer meaning from the IS interpretation. Metaphoric use of space also contributes to ease of inference on the part of the audience. Context-specific information relevant to the environment further enables the audience to infer and recover meaning.
Rosenstock (2008) provides a useful analysis of an international Deaf event (Deaf Way II) and the IS interpreting used there. She describes the conflicting demands on the IS interpreters: “At times, as in the omissions or the use of tokens, the economic considerations clearly override the need for iconicity. In other contexts, such as lexical choices or explanations of basic terms, the repetitions or expansions suggest a heavier reliance on an iconic motivation” (Rosenstock 2008, 154). This elegantly captures many of the competing factors interpreters manage when providing IS interpreting.
Further studies are clearly needed, with more information on who acts as an IS interpreter, what linguistic backgrounds IS interpreters have, and the possible influences of language and cultural background on IS interpreting. There are also as yet no studies of how interpreters (Deaf and hearing) work from IS into other languages, both signed and spoken. Of most applied interest is the fact that IS interpreting can be successful as a means of communication, depending on the experience of the users of the interpreting services. This provides a unique window into inference and pragmatic language use, where the conversational partner is required to make at least as much effort in understanding as the signer/speaker makes in producing understandable communication. Studying this will illuminate how we understand language at a discourse level and how interpreters can work at a meaning-driven level of processing.
6. Conclusions

This chapter has sketched the development of the sign language interpreting profession and the changes within sign language interpreting and Deaf communities. Research into sign language interpreting began even more recently than research on sign language linguistics. Much of the groundwork has now been laid, and the field can look forward to an increasing number of studies that will provide data-driven evidence from an increasing number of sign languages. This will in turn provide a broader understanding of the cognitive and linguistic mechanisms underlying the interpreting process and its associated products.
7. Literature

Adam, Robert/Carty, Breda/Stone, Christopher 2011 Ghostwriting: Deaf Translators Within the Deaf Community. In: Babel 57(3), 1⫺19.
Adam, Robert/Dunne, Senan/Druetta, Juan Carlos 2008 Where Have Deaf Interpreters Come from and Where Are We Going? Paper Presented at the Association of Sign Language Interpreters (ASLI) Conference, London.
Alawni, Khalil 2006 Sign Language Interpreting in Palestine. In: Locker McKee, Rachel (ed.), Proceedings of the Inaugural Conference of the World Association of Sign Language Interpreters. Coleford: Douglas McLean Publishing, 68⫺78.
Alexander, Claire/Edwards, Rosalind/Temple, Bogusia 2004 Access to Services with Interpreters: User Views. York: Joseph Rowntree Foundation.
Allsop, Lorna/Stone, Christopher 2007 Collective Notions of Quality. Paper Presented at Critical Link 5, Sydney, Australia.
Allsop, Lorna/Woll, Bencie/Brauti, Jon-Martin 1995 International Sign: The Creation of an International Deaf Community and Sign Language. In: Bos, Heleen/Schermer, Trude (eds.), Sign Language Research 1994. Hamburg: Signum, 171⫺188.
Bahan, Ben 1989 Notes from a ‘Seeing’ Person. In: Wilcox, Sherman (ed.), American Deaf Culture. Silver Spring, MD: Linstok Press, 29⫺32.
Bahan, Ben 2008 Upon the Formation of a Visual Variety of the Human Race. In: Bauman, Hans-Dieter (ed.), Open Your Eyes: Deaf Studies Talking. Minneapolis: University of Minnesota Press, 83⫺99.
Bassnett, Susan/Trivedi, Harish 1999 Post-Colonial Translation: Theory and Practice. London: Routledge.
Bishop, Michelle/Hicks, Sherry (eds.) 2008 Hearing, Mother Father Deaf: Hearing People in Deaf Families. Washington, DC: Gallaudet University Press.
Bogaerde, Beppie van den/Baker, Anne E. 2008 Bimodal Language Acquisition in KODAs. In: Bishop, Michelle/Hicks, Sherry (eds.), Hearing, Mother Father Deaf: Hearing People in Deaf Families. Washington, DC: Gallaudet University Press, 99⫺132.
Boudreault, Patrick 2005 Deaf Interpreters. In: Janzen, Terry (ed.), Topics in Signed Language Interpreting. Amsterdam: Benjamins, 323⫺356.
Bourdieu, Pierre 1986 The Forms of Capital. In: Richardson, John G. (ed.), Handbook for Theory and Research for the Sociology of Education. New York: Greenwood Press, 241⫺258.
Brennan, Mary/Brown, Richard 1997 Equality Before the Law: Deaf People’s Access to Justice. Coleford: Douglas McLean.
Caledonian Mercury (Edinburgh, Scotland), Thursday 3 July, 1817, Issue 14916.
Cokely, Dennis 1992a Interpretation: A Sociolinguistic Model. Burtonsville, MD: Linstok Press.
Cokely, Dennis (ed.) 1992b Sign Language Interpreters and Interpreting. Burtonsville, MD: Linstok Press.
Cokely, Dennis 2005 Shifting Positionality: A Critical Examination of the Turning Point in the Relationship of Interpreters and the Deaf Community. In: Marschark, Marc/Peterson, Rico/Winston, Elizabeth (eds.), Sign Language Interpreting and Interpreter Education: Directions for Research and Practice. Oxford: Oxford University Press, 3⫺28.
Corfmat, Percy 1990 Please Sign Here: Insights Into the World of the Deaf. Vol. 5. Worthing and Folkestone: Churchman Publishing.
Crystal, David 1997 The Cambridge Encyclopaedia of Language. Cambridge: Cambridge University Press.
Deysel, Francois/Kotze, Thelma/Katshwa, Asanda 2006 Can the Swedish Agency Model Be Applied to South African Sign Language Interpreters? In: Locker McKee, Rachel (ed.), Proceedings of the Inaugural Conference of the World Association of Sign Language Interpreters. Coleford: Douglas McLean, 60⫺67.
Dickinson, Jules/Turner, Graham 2008 Sign Language Interpreters and Role Conflict in the Workplace. In: Valero-Garcés, Carmen/Martin, Anne (eds.), Crossing Borders in Community Interpreting: Definitions and Dilemmas. Amsterdam: Benjamins, 231⫺244.
Dikyuva, Hasan 2008 Mardin Sign Language. Paper Presented at the CLSLR3 Conference, Preston, UK.
Emmorey, Karen/Borinstein, Helsa/Thompson, Robin/Gollan, Tamar 2008 Bimodal Bilingualism. In: Bilingualism: Language and Cognition 11, 43⫺61.
Forestal, Eileen 2005 The Emerging Professionals: Deaf Interpreters and Their Views and Experiences of Training. In: Marschark, Marc/Peterson, Rico/Winston, Elizabeth (eds.), Sign Language Interpreting and Interpreter Education: Directions for Research and Practice. Oxford: Oxford University Press, 235⫺258.
Fox Tree, Erich 2009 Meemul Tziij: An Indigenous Sign Language Complex of Mesoamerica. In: Sign Language Studies 9(3), 324⫺366.
Goldin-Meadow, Susan 2003 The Resilience of Language: What Gesture Creation in Deaf Children Can Tell Us About How All Children Learn Language. New York: Psychology Press.
Green, David/Schweda Nicholson, Nancy/Vaid, Jyotsna/White, Nancy/Steiner, Richard 1990 Hemispheric Involvement in Shadowing vs. Interpretation: A Time-Sharing Study of Simultaneous Interpreters with Matched Bilingual and Monolingual Controls. In: Brain and Language 39(1), 107⫺133.
Groce, Nora Ellen 1985 Everyone Here Spoke Sign Language: Hereditary Deafness on Martha’s Vineyard. Cambridge, MA: Harvard University Press.
Grosjean, François 1997 The Bilingual Individual. In: Interpreting 2(1/2), 163⫺187.
Hall, Nigel 2004 The Child in the Middle: Agency and Diplomacy in Language Brokering Events. In: Hansen, Gyde/Malmkjær, Kirsten/Gile, Daniel (eds.), Claims, Changes and Challenges in Translation Studies. Amsterdam: Benjamins, 258⫺296.
Hay, John 2008 Deaf Interpreters Throughout History. Paper Presented at the Association of Sign Language Interpreters (ASLI) Conference, London.
Hermann, Alfred 1956 Interpreting in Antiquity. In: Pöchhacker, Franz/Shlesinger, Miriam (eds.), The Interpreting Studies Reader. London: Routledge, 15⫺22.
Hitchcock, Tim/Shoemaker, Robert 2008 The Proceedings of the Old Bailey. [Available from: http://www.oldbaileyonline.org, case reference t17710703⫺17; retrieved January 2008]
Inghilleri, Moira 2003 Habitus, Field and Discourse: Interpreting as Socially Situated Activity. In: Target 15(2), 243⫺268.
Jepson, Jill 1991 Urban and Rural Sign Language in India. In: Language in Society 20, 37⫺57.
Johnson, Robert E. 1991 Sign Language, Culture, and Community in a Traditional Yucatec Maya Village. In: Sign Language Studies 73, 461⫺474.
Ladd, Paddy 2003 In Search of Deafhood. Clevedon: Multilingual Matters.
Lee, Raymond 2004 A Beginner’s Introduction to Deaf History. Feltham, UK: British Deaf History Society Publications.
Llewellyn-Jones, Peter 1981 Simultaneous Interpreting. In: Woll, Bencie/Kyle, Jim/Deuchar, Margaret (eds.), Perspectives on British Sign Language and Deafness. London: Croom Helm, 89⫺103.
Locker McKee, Rachel/Napier, Jemina 2002 Interpreting Into International Sign Pidgin: An Analysis. In: Sign Language & Linguistics 5(1), 27⫺54.
López Gómez, Maria José/Bajo Molina, Teresa/Padilla Benítez, Presentación/Santiago de Torres, Julio 2007 Predicting Proficiency in Signed Language Interpreting: A Preliminary Study. In: Interpreting 9(1), 71⫺93.
Malcolm, Karen 2005 Contact Sign, Transliteration and Interpretation in Canada. In: Janzen, Terry (ed.), Topics in Signed Language Interpreting. Amsterdam: Benjamins, 107⫺133.
Marsaja, I Gede 2008 Desa Kolok ⫺ A Deaf Village and Its Sign Language in Bali, Indonesia. Nijmegen: Ishara Press.
Marschark, Marc/Sapere, Patricia/Convertino, Carol/Seewagen, Rosemarie/Maltzen, Heather 2004 Comprehension of Sign Language Interpreting: Deciphering a Complex Task Situation. In: Sign Language Studies 4(4), 345⫺368.
McIntire, Marina (ed.) 1986 Interpreting: The Art of Cross-Cultural Mediation. Proceedings of the 9th RID National Convention. Silver Spring, MD: RID Publications.
Metzger, Melanie 1999 Sign Language Interpreting: Deconstructing the Myth of Neutrality. Washington, DC: Gallaudet University Press.
Metzger, Melanie/Quadros, Ronice M. de 2011 Cognitive Control in Bimodal Bilingual Sign Language Interpreters. Paper Presented at the Workshop “Text: Structures and Processing” at the 33rd Annual Conference of the German Linguistic Society (DGfS), Göttingen.
Mikkelson, Holly 1996 The Professionalization of Community Interpreting. In: Jérôme-O’Keeffe, Muriel (ed.), Global Vision: Proceedings of the 37th Annual Conference of the American Translators Association. Alexandria, VA: American Translators Association, 77⫺89.
Miles, Michael 2000 Signing in the Seraglio: Mutes, Dwarfs and Jestures at the Ottoman Court 1500⫺1700. In: Disability & Society 15(1), 115⫺134.
Miles, Michael 2004 Locating Deaf People, Gesture and Sign in African Histories, 1450s⫺1950s. In: Disability & Society 19(5), 531⫺545.
Moody, Bill 1994 International Sign: Language, Pidgin or Charades? Paper Presented at the Issues in Interpreting 2 Conference, University of Durham.
Moorhead, David 1991 Social Work and Interpreting. In: Gregory, Susan/Hartley, Gillian (eds.), Constructing Deafness. London: Pinter in Association with the Open University, 259⫺264.
Morales, Alejandro/Hanson, William E. 2005 Language Brokering: An Integrative Review of the Literature. In: Hispanic Journal of Behavioral Sciences 27(4), 471⫺503.
Moser-Mercer, Barbara/Lambert, Sylvie 1994 Bridging the Gap: Empirical Research in Simultaneous Interpretation. Amsterdam: Benjamins.
Napier, Jemina 2002 Sign Language Interpreting: Linguistic Coping Strategies. Coleford: Douglas McLean.
Napier, Jemina 2005 A Time to Reflect: An Overview of Signed Language Interpreting, Interpreter Education and Interpreting Research. In: Locker McKee, Rachel (ed.), Proceedings of the Inaugural Conference of the World Association of Sign Language Interpreters. Coleford: Douglas McLean, 12⫺24.
Napier, Jemina (ed.) 2009 International Perspectives on Sign Language Interpreter Education. Washington, DC: Gallaudet University Press.
Napier, Jemina/Locker McKee, Rachel/Goswell, Della 2006 Sign Language Interpreting: Theory and Practice in Australia and New Zealand. Sydney: Federation Press.
Napier, Jemina/Rohan, Meg/Slatyer, Helen 2005 Perceptions of Bilingual Competence and Preferred Language Direction in Auslan/English Interpreters. In: Journal of Applied Linguistics 2(2), 185⫺218.
Nicodemus, Brenda 2009 Prosodic Markers and Utterance Boundaries in American Sign Language Interpretation. Washington, DC: Gallaudet University Press.
Nyst, Victoria 2007 A Descriptive Analysis of Adamorobe Sign Language (Ghana). PhD Dissertation, University of Amsterdam. Utrecht: LOT.
Ozolins, Uldis/Bridge, Marianne 1999 Sign Language Interpreting in Australia. Melbourne: Language Australia.
Padden, Carol 2000 Simultaneous Interpreting Across Modalities. In: Interpreting 5(2), 171⫺187.
Pepys, Samuel 1666 The Diary of Samuel Pepys (November 1666). Ed. by Henry Benjamin Wheatley. Project Gutenberg Release #4200. [Available from: http://onlinebooks.library.upenn.edu/webbin/gutbook/lookup?num=4169; retrieved 24th November 2011]
Preston, Paul 1996 Chameleon Voices: Interpreting for Deaf Parents. In: Social Science & Medicine 42, 1681⫺1690.
Price, Cathy/Green, David/Studnitz, Roswitha von 1999 A Functional Imaging Study of Translation and Language Switching. In: Brain 122(12), 2221⫺2235.
Quinn, Gary 2010 Schoolization: An Account of the Origins of Regional Variation in British Sign Language. In: Sign Language Studies 10(4), 476⫺501.
Quinto-Pozos, David 2005 Factors that Influence the Acquisition of ASL for Interpreting Students. In: Marschark, Marc/Peterson, Rico/Winston, Elizabeth (eds.), Sign Language Interpreting and Interpreter Education: Directions for Research and Practice. Oxford: Oxford University Press, 159⫺187.
Ragir, Sonia 2002 Constraints on Communities with Indigenous Sign Languages: Clues to the Dynamics of Language Genesis. In: Wray, Alison (ed.), The Transition to Language. Studies in the Evolution of Language. Oxford: Oxford University Press, 272⫺294.
Rosenstock, Rachel 2008 The Role of Iconicity in International Sign. In: Sign Language Studies 8(2), 131⫺159.
Roy, Cynthia B. 1999 Interpreting as a Discourse Process. New York: Oxford University Press.
Russell, Debra 2002 Interpreting in Legal Contexts: Consecutive and Simultaneous Interpretation. Burtonsville, MD: Linstok Press.
Sandler, Wendy 1999 Prosody in Two Natural Language Modalities. In: Language and Speech 42, 127⫺142.
Scott-Gibson, Liz 1991 Sign Language Interpreting: An Emerging Profession. In: Gregory, Susan/Hartley, Gillian (eds.), Constructing Deafness. London: Pinter in Association with the Open University, 253⫺258.
Scott-Gibson, Liz/Ojala, Raili 1994 International Sign Interpreting. Paper Presented at the Fourth East and South African Sign Language Seminar, Uganda.
Simpson, Stewart 1991 A Stimulus to Learning, a Measure of Ability. In: Gregory, Susan/Hartley, Gillian (eds.), Constructing Deafness. London: Pinter in Association with the Open University, 217⫺226.
Simpson, Stewart 2007 Advance to an Ideal: The Fight to Raise the Standard of Communication Between Deaf and Hearing People. Edinburgh: Scottish Workshop Publications.
Siple, Linda A. 1998 The Use of Addition in Sign Language Transliteration. In: Weisel, Amatzia (ed.), Issues Unresolved: New Perspectives on Language and Deaf Education. Washington, DC: Gallaudet University Press, 65⫺75.
Skutnabb-Kangas, Tove 1981 Bilingualism or Not: The Education of Minorities. Clevedon: Multilingual Matters.
Smith, Theresa B. 1996 Deaf People in Context. PhD Dissertation, University of Washington.
Solow, Sharon 1981 Sign Language Interpreting: A Basic Resource Book. Silver Spring, MD: National Association of the Deaf.
Steiner, Ben 1998 Signs from the Void: The Comprehension and Production of Sign Language on Television. In: Interpreting 3(2), 99⫺146.
Stone, Christopher 2008 Whose Interpreter Is She Anyway? In: Roy, Cynthia (ed.), Diversity and Community in the Worldwide Sign Language Interpreting Profession: Proceedings of the 2nd Conference of the World Association of Sign Language Interpreters, held in Segovia, Spain, 2007. Coleford: Douglas McLean, 75⫺88.
Stone, Christopher 2009 Towards a Deaf Translation Norm. Washington, DC: Gallaudet University Press.
Stone, Christopher in press The UNCRPD and ‘Professional’ Sign Language Interpreter Provision. In: Schaeffner, Christina (ed.), The Critical Link 6: Interpreting in a Changing Landscape. Amsterdam: Benjamins.
Stone, Christopher/Adam, Robert 2008 Deaf Interpreters in the Community ⫺ The Missing Link? Paper Presented at the CIT Conference, Puerto Rico.
Stone, Christopher/Woll, Bencie 2008 DUMB O JEMMY and Others: Deaf People, Interpreters and the London Courts in the 18th and 19th Centuries. In: Sign Language Studies 8(3), 226⫺240.
Takagi, Machiko 2005 Sign Language Interpreters of Non-English Speaking Countries Who Support International Activities of the Deaf. In: Locker McKee, Rachel (ed.), Proceedings of the Inaugural Conference of the World Association of Sign Language Interpreters. Coleford: Douglas McLean, 25⫺31.
Thomas, Esther 2008 Interview with Clara Allardyce. In: NEWSLI, July 2008, 18.
WASLI (World Association of Sign Language Interpreters) 2006 Joint Statement of WASLI and WFD (adopted on 23/01/2006). [Available from: http://www.wasli.org/joint-agreements-p21.aspx; retrieved December 2011]
Wadensjö, Cecelia 1998 Interpreting as Interaction. London: Longman.
Woll, Bencie 1988 Report on a Survey of Sign Language Interpreter Training and Provision Within the Member Nations of the European Community. In: Babel 34(4), 193⫺210.
Woll, Bencie/Ladd, Paddy 2010 Deaf Communities. In: Marschark, Marc/Spencer, Patricia E. (eds.), Oxford Handbook of Deaf Studies, Language, and Education (2nd Edition). Oxford: Oxford University Press, 159⫺172.
Xiao, Xiaoyan/Yu, Ruiling 2009 Survey on Sign Language Interpreting in China. In: Interpreting 11(2), 137⫺163.
Christopher Stone, London (United Kingdom)
41. Poetry

1. Introduction
2. Definition(s) of sign language poetry
3. Sources of sign language poetry
4. Sign language poets
5. Purposes of sign language poetry
6. Genres within the poetic genre
7. Figurative poetic language
8. Visual creativity
9. Repetition, rhythm, and rhyme
10. Conclusion
11. Literature
Abstract

This chapter explores some linguistic, social, and cultural elements of sign language poetry. Taking sign language poetry to be a language art-form recognised by its community of users as somehow noticeably ‘different’ and poetic, I identify characteristics of signing poets and the cultural, educational, and personal uses of sign language poetry. Genres of signed poetry include signed haiku and short narrative poems, as well as ‘lyric’ poems. Deliberately ‘deviant’ creation of meaning is seen where figurative language is used extensively in signed poems, especially as language form and meaning interact to produce metaphors, similes, and hyperbolic forms, while ‘deviant’ creation of highly visual new signs draws further attention to the poetic language. Noticeable elements of repetition occur at the grammatical, sign, and sub-sign levels to create additional poetic effect.
1. Introduction

This chapter will consider some general points about the function, content, and form of sign language poetry and the contexts in which it occurs. It highlights linguistic, sociolinguistic, and cultural features of the art form, offering some definitions of sign language poetry and reviewing its origins and purpose. A brief examination of some of the different types of poems within the poetic genre is followed by a description of the form of language used in the poems, including figurative and visually creative language and repetition, rhyme, and rhythm in signs. Examples are drawn from poems in a range of sign languages, showing the similarities (and some differences) in sign language poetry across languages.
Many of the illustrative examples come from poems that are available to wider audiences either through the Internet or commercial DVDs with a reliable source. Readers should note that sign language poems often have no permanent record, being more like poems in oral traditions. Dates of performances or published recordings may not reflect their time of composition. The dates given here for some poems are those of recordings on commercial video or DVD format or of poems available on the Internet, especially at www.bristol.ac.uk/bslpoetryanthology (a list of the poems mentioned in this chapter that are available at this website is provided in section 11). Other poems mentioned here have been performed live or have been recorded by individuals, but the recordings are not available commercially.
It is important to remember that sign language poetry is enjoyable. It is often fun. Signed poems frequently make people smile, laugh, and applaud, and make deaf children giggle with delight. It is a very positive, celebratory aspect of life for communities of people whose daily experiences are not always easy. Any analysis or observations on sign language poetry should serve to highlight why this is so. The more we understand the workings and sheer genius of this language art form, the richer our lives become ⫺ Deaf or hearing, signer or non-signer.
2. Definition(s) of sign language poetry

Defining poetry is probably impossible. Perhaps the one defining feature of any poetry is that it defies definition. It may be just as naïve and fruitless to seek a definition of sign language poetry, beyond saying that “we know a poem when we see one”. However, even this approach is strongly related to culture and literacy, as we will see later, because many Deaf people have little exposure to the poetry of their own language and even less education in how to appreciate and engage with it. Given this, there may be considerable disagreement over what is and is not a sign language poem, but there are certain elements of form, content, and function of poetry that many Deaf poets, at least, appear to agree upon.
In a general sense, it is understood that sign language poetry is the ‘ultimate’ in aesthetic signing, in which the language used is as important as ⫺ or even more important than ⫺ the message. Poetic language in general stems from everyday language but deviates from it so that the language itself stands out in the foreground of the utterance, increasing its communicative power beyond simple propositional meaning (see, for example, Leech 1969). Sign language poetry is an art form that entertains and educates members of the Deaf community ⫺ creating, challenging, and strengthening the bonds within the community. Beyond these rather general definitions, however, lies the challenge of specifying how all this is realised.
Sign language folklore plays a key role in poetry. In American Sign Language (ASL), signlore (Carmel 1996) includes stories and games using the manual alphabet and numbers, sign language puns, and deliberate blending of different signs. These elements may all be seen in some signed poems. Different Deaf communities have their own signlore (see, for example, Smith/Sutton-Spence 2007), but it appears that signlore is a major part of any genre of creative art signing, such as narrative, anecdote, and jokes, and that there is no clear dividing line between these and poems (especially narrative poems, poems that give brief stories of incidents, or humorous poems). Because poetic elements of sign language may be seen in other creative language forms, what one person calls a poem, another person might see as a different genre containing poetic language. The ASL poet Clayton Valli (1993) remarked that differences between ASL poetry and prose were matters of degree. Richard Carter, a British Sign Language (BSL) poet and storyteller, explained his view (in a seminar at the University of Bristol, 2008) that stories are more likely to express events or ideas realistically and more literally than poems do. Stories use altered signs much less than poems, where the events or ideas are expressed more metaphorically and with more attention to the language. Richard offered the following examples:

In a story involving a bear in a zoo, the bear might fingerspell normally, but in my poem [Sam’s Birthday], the bear fingerspells using bear-paw clawed handshapes. In my poem Looking for Diamonds, I use the slow-motion technique to show running towards the diamond but in a story, I would just sign running normally. The slow-motion is poetic ⫺ more dreamlike. (Author’s translation)
Poetry is a cultural construction, and Deaf people’s ideas about the form and function of sign language poetry do not necessarily coincide with those of hearing people in their surrounding social and literary societies. Deaf people around the world may also differ in their beliefs about what constitutes sign language poetry. They may even deny that their culture has poetry. This may be especially so where Deaf people’s experiences of poetry have been overwhelmingly of the written poetry of a hearing majority. For a long time, Deaf people have been surrounded by the notion that spoken languages are for high status situations and that ‘deaf signing’ is inferior and only fit for social conversation. Because of its status, poetry has been seen as a variety of language that should be conducted in spoken language.
What is considered acceptable or relevant as poetry varies over time and with the perspectives of different poets and audiences. For example, opinions are divided over whether signed songs are a valid form of signed poetry. Signed songs (such as hymns or pop songs) are signed translations of song lyrics, performed in accompaniment to the song. Rhythm (a key feature of signed poetry, see section 9.1) is a strong component in signed songs, but this is driven by the song, not by the sign language. Where translations are more faithful to the words of the song, there is little creation of the strong visual images often considered important in original sign language poetry, but other performers of signed songs may bring many features of creative language into their translations, such as metaphorical movement and blending (see sections 7 and 8).
Signed songs are particularly enjoyed by audiences with some experience of the original songs ⫺ either because they have some hearing or because they heard the songs before becoming deaf. The performances are clearly poetic. The debate, however, is whether they belong to the genre of sign language poetry. They will not be discussed further here.
‘Deaf poetry’ and ‘sign language poetry’ are not necessarily the same thing. Both are composed by people for whom sound-based language is essentially inaccessible, but while sign language poetry is normally composed and performed by Deaf people, not all deaf poets compose and perform in sign language; some prefer instead to write poetry in another language. Poetry that is written by deaf people is clearly different in its form (composed in a two-dimensional static form rather than a three-dimensional kinetic form), is frequently different in its content, and often has a very different function from sign language poetry. Despite observations that some written poetry by deaf people can take positive approaches to being deaf and the visual world, there is no doubt that there are more references to sound and its loss in deaf-authored written poetry than in signed poetry (Esmail 2008). We will see that themes in sign language poetry usually emphasise the visual world and the beauty of sign language (Sutton-Spence 2005). Loss and negative attitudes towards deafness are far less common. That said, signed poetry by some young Deaf people addresses themes of resentment toward being deaf, and it is important to acknowledge this.
2.1. Historical change

Historical records in France and the USA mention performances of sign language poetry at the large Deaf Banquets of the 19th century, although it is not clear what form these poems took (Esmail 2008). Early ASL ‘poetry’ recorded in films from the mid-20th century was often purely rhythmic ‘percussion signing’, in which simple signs or phrases were repeated in different rhythms, often in unison (Peters 2000). These pieces were often performed at club outings. In Britain, for most of the 20th century, sign language poetry competitions were the venue for performance of the art form, and most poems were translations of English poetry, for which highly valued skills in English were required. There was also a tendency for the performances of these poems to incorporate large, dramatic, or ‘theatrical’ gestures or styles in the signing. This link between English poetry and signed poetry may account for some of the attitudes that many older British Deaf people still hold towards sign language poetry.
In the 1970s, the pioneering Deaf poet Dorothy (‘Dot’) Miles began experimenting with creating ASL poetry along poetic principles in the sign language ⫺ using repetition of rhythm and of sub-sign parameters such as handshape, movement, or location (see chapter 2, Phonology, for details). Although this was an important development in signed poetry, many of her earlier works were composed simultaneously in English and a sign language (either ASL or BSL), and it was nearly two decades before sign language stood alone in her poems. As Dorothy’s work loosened its ties with English, she began building more visually creative elements into her poetry, by using more poetic structures of space, balance, and symmetry, and by making more creative use of classifiers and role shift. Clayton Valli’s 1993 analysis of ASL poetry also used theories derived from English poetry (although his work often contained many forms unrelated to English poetics). Contemporary poets around the world like Paul Scott, Wim Emmerik, Nelson Pimenta, and Peter Cook rely far less on poetic principles shared with their national written language. However, in Britain at least, some children and younger Deaf people are again playing with mixed forms of English and BSL to create poetry (see, for example, Life and Deaf, 2006), and current British Deaf poets draw on a range of these poetic and linguistic devices to create powerful visual representations of imagery.
Video technology has also been highly significant in the linguistic and cultural development of signed poetry. It is possible to see a division between ‘pre-video’ creative sign and ‘post-video’ sign language poetry (Rose 1994). Video allows poets to refine their work, creating a definitive ‘text’ or performance of that text that is very complex, and others can review it many times, unpacking dense meanings from within the rich language in a way that was not possible when the only access to a poem was its live performance. Krentz (2006) has argued that the impact of video on sign language literature has been as significant as that of the printing press upon written literature. As well as allowing work to be increasingly complex, video has greatly expanded the impact of sign language poetry through its permanence and widespread distribution, and it has shifted the ‘ownership’ of works away from the community as a whole towards the individual poets who compose or perform poems in the videos.
2.2. Themes and content

Theme may determine whether or not people consider something poetry. For example, in some circles in the USA in the 1990s, a creative piece was not considered to be ASL poetry unless it was about Deaf identity. Almost two decades later, sign language poetry can be seen on any topic. This is further evidence that sign language poetry, like other poetry, changes over time as poets and audiences develop different expectations and attitudes. Despite the move away from a focus on specifically Deaf-related themes, Christie and Wilkins (2007) found that over half of the ASL poems they reviewed in their corpus could be interpreted as having themes relating to Deaf identity, including Deaf resistance to oppression and Deaf liberation. Similarly, in the poems collected in 2009⫺2010 for the online BSL poetry anthology, just over half had Deaf protagonists or characters and thus could be seen to address “Deaf issues” including Deaf education, sign language, and Deaf community resistance. Nevertheless, in terms of content, sign language poems also often explore the possibly universal “Big Issues” explored by any poetry: the self, mortality, nationality, religion, and love. Examples of poems carrying any of these themes may be seen in the online BSL poetry anthology. Sign language poetry tackles these themes from the perspective of a Deaf person and/or their Deaf community, using an especially visual Deaf take on them, often showing the world from a different perspective. Morgan (2008, 25) has noted: “There are only two questions, when you come down to it. What is the nature of the world? And how should we live in it?” Deaf poets ask, in paraphrase, “What is the nature of the Deaf world? And how should Deaf people live in it?” Consequently, sign language poetry frequently addresses themes such as a Deaf person’s identity, Deaf people’s place in the world, Deaf values and behaviour, the ignorance of the hearing society, the visual and tactile
sensory Deaf life experience, and sign language. Questions of nationality, for example, may reflect on the place of a Deaf person within a political nation or may consider the worldwide Deaf Nation. Paul Scott’s BSL poem Three Queens (2006) and Nelson Pimenta’s Brazilian Sign Language poem The Brazilian Flag (2003) both consider the poets’ political, historical, and national heritage from a Deaf perspective (Sutton-Spence/de Quadros 2005). Dorothy Miles’ ASL poem Word in Hand (Gestures, 1976) explores membership of the worldwide Deaf Nation for any deaf child in any country. Clayton Valli’s ASL poem Deaf World (1995) considers all worlds, rejecting the hearing world in favour of a world governed by a Deaf perspective. Most sign language poetry (at least in Britain and several other countries whose sign language poetry I have been privileged to see) is ‘positive’ ⫺ optimistic, cheerful, celebratory, and confident, showing pride in being Deaf and delight in sign language. While some poems may be considered ‘negative’ ⫺ referring to problems of oppression, frustration, and anger ⫺ they often deal with these issues in a positive or at least humorous way, and even the angry poems are frequently very funny, albeit often with a rather bleak, dark humour. Signed poetry frequently addresses issues of sign language or the situation of the Deaf community. Very rarely do we see laments for the loss of hearing, although young people may refer to this more (for example, some children’s poems in Life and Deaf, 2006). We are more likely to see teasing of, or objection to, the behaviour of hearing people, and this includes poems about fighting against oppression. Poems are concerned with celebrating sign language and what is valued in the daily Deaf experience, such as sight, communication, and Deaf togetherness. In BSL, Paul Scott’s Three Queens (2006) and Dorothy Miles’ The Staircase (1998), for example, show these elements clearly. Celebration of Deaf success and discovering or restating identity, at both the collective and individual levels, is also important, although some poems may issue challenges to members of Deaf communities. For example, Dorothy Miles’ poem Our Dumb Friends (see Sutton-Spence 2005) exhorts community members to stop infighting. Many poems do all this through extended metaphor and allegory.
2.3. Formal aspects of sign language poetry

The form of language used in sign language poetry is certainly one of its key identifying features. As the ultimate form of aesthetic signing, poetry uses language highly creatively, drawing on a wide range of language resources (which will be considered in more detail below), such as deliberate selection of sign vocabulary sharing similar parameters, creative classifier signs, role shift (or characterisation), specific use of space, eye-gaze, facial expressions, and other non-manual features. In addition, repetition and the rhythm and timing of signs are frequently seen as crucial to signed poetry. It is not easy to determine which of these elements are essentially ‘textual’ (that is, inherent to the language used) and which are performance-related. In written poetry, separation of text and performance might be important and is mostly unproblematic, but in sign language poetry, the two are so closely interlinked that it is perhaps counterproductive to seek distinctions. Deaf poets and their audiences frequently mention the importance of poetic signing being ‘visual’, and this highly visual effect can be achieved through both the text and the performance. Clearly, all sign language is visual because
it is perceived by the eye, but the specific meaning here alludes to the belief that the signer must create a strong visual representation of the concepts in order to produce a powerful visual image in the mind of the audience. Different styles of poem are considered appropriate for audiences of different ages. Many deaf poets attach great importance to sharing poetry with deaf children and encouraging the children to create their own work. Younger children’s poetry focuses more on elements of repetition and rhythm and less on metaphor. Older children may be encouraged to play with the ambiguous meaning of classifier signs, presenting alternative interpretations of the size and identity of the referent. The richer metaphorical uses of signed poetry, however, are considered more appropriate for older audiences or for those with a more advanced understanding, or “literacy”, of ways to appreciate poetry (Kuntze 2008). We cannot expect audiences with no experience of sign language poetry to understand signed poems and make inferences in the way intended by the poets. Such literacy needs to be taught, and many Deaf audiences still feel a little overwhelmed by, alienated from, or frankly bored by sign language poetry. A member of the British Deaf community recently claimed to us that sign language poetry was only for “Clever Deaf”. In fact, far from being only for Clever Deaf, sign language poetry is part of the heritage of the signed folklore seen in any Deaf Club.
3. Sources of sign language poetry

As was mentioned above in relation to signlore, many of the roots of sign language poetry lie in Deaf folklore. Bascom (1954, 28) has observed that “[i]n addition to the obvious function of entertainment and amusement, folklore serves to sanction the established beliefs, attitudes and institutions, both sacred and secular, and it plays a vital role in education in nonliterate societies”. Folklore transmits culture down the generations, provides rationalisation for beliefs and attitudes if they are questioned, and can be used to put social pressure on those who deviate from social norms. It may be said that many of the functions of sign language poetry are identical, because they stem from the same source. Some poetic forms may spread and be shared with other Deaf communities; for example, there is some evidence that some ABC games that originated in ASL have been adopted and adapted by other Deaf communities around the world. However, it appears that sign language poets mostly draw upon those language and thematic customs that can be appreciated by their own audiences. Sign language poetry has also grown out of specific language learning environments. Sometimes poetry is used for second-language learning in adults. Poetry workshops have also been used to encourage sign language tutors to think more deeply about sign language structure in order to aid their teaching. Signing poetry to Deaf children is a powerful way to teach them about sign language in an education system where any formal sign language study is often haphazard at best. Many Deaf poets (including Paul Scott, Richard Carter, John Wilson, Peter Cook, and Clayton Valli in the UK and USA) have worked extensively with children to encourage them to compose and perform sign language poetry. Poetry improves children’s confidence and allows them to express their feelings and develop their Deaf identity, by encouraging them to focus on elements of the language and play with the language form, exploring their language’s
potential. When children are able to perform their work in front of others, it further validates their language and gives them a positive sense of pride. Teachers also use poems to teach lessons about Deaf values. In a narrative poem by Richard Carter (discussed in more detail below), a signing Jack-in-a-Box teaches a little boy about temptation, self-discipline, guilt, and forgiveness.
4. Sign language poets

Sign language poetry is usually composed by members of the Deaf community. Fluent hearing signers might compose and perform such poetry (for example, Bauman’s work referred to in Bauman/Nelson/Rose 2006), and language learners might do so as part of their exploration of the language (Vollhaber 2007). However, these instances are rare and peripheral to the core of Deaf-owned sign language poetry. A belief in the innate potential of any deaf signer to create sign language poetry spurs Deaf teachers to bring sign language poetry to deaf children, and organisers to hold festivals such as the BSL haiku festival, where lay signers can watch established poets and learn poetry skills for themselves. Nevertheless, it is clear that some signers in every community have a specific poetic gift. They are the ones sought out at parties and social events to tell jokes or stories or to perform. Rutherford (1995) describes them as having “the knack”; Bahan (2006) has called them “smooth signers”. Some British Deaf people say that a given signer “has beautiful signing” or simply “has it”. These signers may not immediately recognise their own poetic skills, but they come to be accepted as poets. Sometimes this validation comes from others in the community, but it may also come from researchers at universities ⫺ if analysis of your poetic work merits the attention of sign language linguistic or cultural research, then it follows that you must be a poet. Sometimes it comes from invitations to perform at national or international festivals or on television, or from winning a sign language poetry competition. Sometimes it is simply a slow realisation that poetry is happening within. It might be expected that Deaf poets would come from Deaf families, where they grew up with the greatest exposure to the richest and most diverse forms of sign language. Indeed, some recognised creative signers and poets do have Deaf parents who signed to them as children. However, many people recognised as poets did not grow up with sign language at home. Clive Mason, a leader in the British Deaf community (who did not grow up with sign language at home), has suggested to me why this might be the case (personal communication, December 2008). There are two types of creative sign language ⫺ that which has been practised to perfection and performed, and that which is more spontaneous. Clive suggested that the former is characteristic of people from Deaf families who grew up signing, while the latter is seen in people whose upbringing was primarily oral. Signers with Deaf parents might be more skilled creatively in relation to the more planned and prepared performances of poetry because they have been exposed to the rules and conventions of the traditional art forms. Well-known and highly respected American Deaf poets, such as Ben Bahan and Ella Mae Lentz, and some British poets, including Ramon Wolfe, Judith Jackson, and Paul Scott, grew up in signing households. On the other hand, signers with an oral upbringing are skilled in spontaneous creativity simply because it was the sole creative language
option when they had no exposure to the traditional art forms. Recognised poets who grew up without signing Deaf parents include Dot Miles, Richard Carter, John Wilson, and Donna Williams in Britain, and Nigel Howard, Peter Cook, and Clayton Valli in North America. Many sign language poets work in isolation, but increasing numbers of workshops and festivals, both national and international, allow them to exchange ideas and see each other’s work. Video recordings of others’ work, especially now via the Internet, also allow the exchange of ideas. These developments mean that we can expect more people to try their hand at poetry and the art form to develop in new directions.
5. Purposes of sign language poetry

Poetry can be seen as a game or a linguistic ‘luxury’, and its purpose can be pure enjoyment. For many Deaf poets and their audiences, that is its primary and worthy aim. It is frequently used to appeal to the senses and the emotions, and humour in signed poems is especially valued. Sometimes the humour may be “dark”, highlighting a painful issue, but, as was mentioned above (section 2.2), this is achieved in a safely amusing way. For many Deaf people, much of the pleasure of sign language poetry lies in seeing their language being used creatively. It strengthens the Deaf community by articulating Deaf Culture, community, and cultural values, especially pride in sign language (Sutton-Spence 2005; Sutton-Spence/de Quadros 2005). Poetry shows the world from new and different perspectives. Sign language poetry is no exception. The BSL poet John Wilson described how his view of the world changed the first time he saw a BSL poem at the age of 12 (seminar at Bristol University, February 2007). Until then, he felt as though he saw the world through thick fog. Seeing his first BSL poem was like the fog lifting and allowing him to see clearly for the first time. He recalls that it was a brief poem about a tree by a river, but he felt as though he was part of that scene, seeing and experiencing the tree and river as though they were there. He laughed with the delight of seeing the world clearly through that poetic language for the first time at 12 years old. Sign language poems also empower Deaf people to realise themselves through their creativity. Many sign poets tell of their use of poetry to express and release powerful feelings that they cannot express through English. Dorothy Miles wrote in some unpublished notes in 1990 that one aim of sign language poetry is to satisfy “the need for self-identity through creative work”. Poets can gain satisfaction from having people pay attention to them. Too often Deaf people are ignored or marginalised (and this is especially true in childhood), so some poets are fulfilled by the attention paid to them while performing. Richard Carter’s experience reveals one personal path to poetry that supports Dorothy’s claim and which may resonate with other Deaf poets. In a seminar given to post-graduate students at Bristol University (February 2008), he explained:

I have a gift. I really believe it comes from all my negative experiences before and the frustrations they created within me. I resolved them through poetry in order to make myself feel good. I think it’s because when I was young, I didn’t get enough attention. […] So I tried to find a way to get noticed and I feel my poetry got me the attention.
Beyond self-fulfilment, however, many poets frequently mention the wish to show other people ⫺ hearing people, Deaf children, or other members of the Deaf community ⫺ what sign language poetry can do. For hearing non-signers, the idea that poetry is possible in sign languages is an eye-opener. Hearing people can be shown the beauty and complexity of sign language and, through it, learn to respect Deaf culture and Deaf people. In many cases, sign language poetry shows a Deaf worldview, considering how the world might be if it were a Deaf world (see Bechter 2008). Bechter argues that part of the Deaf cultural worldview is that their world is ⫺ or should be ⫺ made of Deaf lives. Consequently, the job of a Deaf storyteller or poet is to see or show Deaf lives where others might not see them. The key linguistic and metaphorical devices for creating visions of these alternative realities are role shift and anthropomorphism (or personification). Role shift in these ‘personification’ pieces uses elements that Bechter identifies as being central to the effect ⫺ lack of linear syntax or citation signs, and extensive use of “classifier-expressions, spatial regimentation and facial affect” (2008, 71). Role shift blurs the distinction between text and performance considerably. The elements used allow poets to closely mimic the appearance and behaviour of people described in the poems. Importantly, role shift frequently allows the poet to take on the role of non-human entities and depict them in a novel and entertaining way. By “becoming” the entity, the poet highlights how the world would be if these non-human entities were not just human, but also deaf, seeing the world as a visual place and communicating visually (see chapter 17, Utterance Reports and Constructed Action, for details). A BSL sign sometimes used in relation to Deaf creativity, including comedy and poetry, is empathy ⫺ a sign that might be glossed as ‘change places with’, in relation to the poet or comedian changing places with the creature, object, or matter under discussion. Empathy is the way that Deaf audiences enjoy relating to the performer’s ideas. As the character or entity comes to possess the signer’s body, we understand we are seeing that object as a Deaf human.
6. Genres within the poetic genre

Given that much of the source of sign language poetry lies in Deaf folklore, we might expect the poetic genres in sign languages to reflect the language games and stories seen there. Indeed, some pieces presented as ASL poems are within the genres of ABC-games, acrostics (in which the handshape of each sign corresponds to a letter from the manual alphabet, spelling out a word), and number games seen in ASL folklore. Clayton Valli, for example, was a master of such poems, essentially building on these formats with additional rhythm or creativity in sign use. Poetic narratives are also the sources for longer pieces that might be termed (for a range of reasons) narrative poems. Traditions of poetry from other cultures also influence sign language poetry, so that genres from those traditions may shape genres in sign language poems (especially haiku, described below). Established ideas of form in English poetry lay behind many of Dorothy Miles’ sign language poems. In a television interview for Deaf Focus in 1976, she explained:
I am trying […] to find ways to use sign language according to the principles of spoken poetry. For example, instead of rhymes like ‘cat’ and ‘hat’, I might use signs like wrong and why, with the same final handshape. [In this ASL case, the d-handshape.]
Many sign language poems today are essentially “lyric poems” ⫺ short poems, densely packed with images and often linguistically highly complex. Additionally, Beat poetry, rap, and epic forms have all been used in sign language poems. However, perhaps the most influential “foreign” genre has been the haiku form (Kaneko 2008). Haiku originated in Japan as a verse form of seventeen syllables. Adherence to the syllable constraints is less important in modern English-language haiku, where it is more important that the poem should be very brief, express a single idea or image, and stir up feelings. Haiku is sometimes called “the six-second poem”. Haiku’s strong emphasis on creating a visual image makes sign language an ideal vehicle for it. Dorothy Miles defined haiku as “very short poems, each giving a simple, clear picture”. Of her poems in this genre, she wrote, “I tried to do the same thing, and to choose signs that would flow smoothly together” (1988, 19). The features that Dorothy identified appear to have become the “rules” for a signed haiku. Her four Seasons haiku verses (1976) ⫺ Spring, Summer, Autumn, and Winter ⫺ have been performed in ASL by other performers and were analysed in depth by Klima and Bellugi as part of their groundbreaking and highly influential linguistic description of ASL, The Signs of Language (1979). Their analysis of Summer, using their ideas of internal structure, external structure, and kinetic superstructure, is well worth reading. Signed haiku has subsequently given rise to signed renga, a collaboratively created and performed set of related haiku-style poems. Signed renga has spread internationally and has been performed in countries including Britain, Ireland, Sweden, and Brazil.
7. Figurative poetic language

In traditional haiku, the direct representation of an image is presented literally, so figurative language such as metaphor is not usually considered appropriate. However, in many other sign language poems, figurative language is used to increase the communicative power of the signs in the poem. Metaphor, simile, hyperbole, and personification are all figurative devices used in sign language poetry. Reference has already been made to the importance of personification in sign language poetry. Hyperbole, or caricature, is seen as a central element of creative sign language, often shown through exaggerated facial expression. It is frequently a source of humour in poetry and often works in conjunction with personification, so that its skilled use is highly valued. Many signed poems, and comments upon the figurative language within them, may be found at http://www.bristol.ac.uk/bslpoetryanthology.
7.1. Metaphor

Metaphor may be seen in many signed poems where the apparent topic of the poem does not coincide with its theme. When the content gives no direct clue
to the theme in a signed poem, its interpretation is dependent on the expectations of the audience, guided at times by the poet’s own explanations. For example, Clayton Valli’s ASL poem Dandelions describes dandelions that keep growing in a lawn, despite the gardener’s attempts to pull them out or mow them flat. Although there is no mention of deafness in the poem, signing audiences might understand that the content of the poem carries the theme of Deaf people’s resilience in the face of constant attempts to destroy their sense of themselves and the Deaf community. Paul Scott’s entertaining BSL poem The Tree (2006) ostensibly describes the life cycle of a tree, but he has explained that it is to be understood as a commentary on the belief that the Deaf community cannot be erased simply by cutting it down. The tree may be felled and dragged away, but seeds will grow again into another tree. Similarly, many of Dorothy Miles’ “animal poems” such as Elephants Dancing (1976) or The Ugly Duckling (1998) are only superficially about animals. Their themes address the situation of Deaf people in relation to the wider hearing society. In the former poem, Dorothy describes elephants that had been taught to “dance” for human entertainment by having their legs chained to inhibit their natural movement. She ends her English version of this poem with the lines “I hope one day to see/ Elephants dancing free”. Despite the focus on elephants and the lack of any reference to Deaf people, this is clearly intended to present an analogy with Deaf people being required to use speech for the satisfaction of others, rather than their natural mode of signs. She explained this in an introduction to a performance of the poem she gave in London in 1992. Even without this introduction, however, the use of sign language in the poem and the expectations of Deaf audiences would lead to this interpretation. The message of the well-known story of the Ugly Duckling can be interpreted in many ways, but when the story is presented as a sign language poem, most Deaf audiences will see it as the story of a Deaf child growing up among hearing people before finding a Deaf identity in the Deaf community. Not all Deaf audiences will bring the same expectations to a poem in sign language, however. Wilcox (2000), considering a signed poem about two dogs forced to cooperate because they are chained together, has observed that members of different national Deaf communities interpreted it in different ways according to their cultural experiences and beliefs: Deaf Americans, aware of divisions within their own community, suggested that the dogs stood for ASL users and Signed English users; Deaf Swiss-German people saw the dogs as deaf and hearing people who needed to work together; and Deaf Italians thought the dogs were people of different races, and not Deaf at all because they believed that a defining part of being Deaf is to be united. It is important to note that not all larger metaphorical themes in signed poems are specifically related to deafness. Dorothy Miles’ poem Hang Glider (1976) is about the fear that anyone ⫺ Deaf or hearing ⫺ might have when facing a new situation that promises great reward in success but great loss in failure. Paul Scott’s Too Busy to Hug is a warning to us all to open our eyes to the beauty of nature around us. Richard Carter’s Looking for Diamonds is about the search for enduring love ⫺ something both Deaf and hearing people might long for. 
Many conceptual and orientational metaphors (Lakoff/Johnson 1980) are similar in the thought processes and languages of several cultures. For example, many spoken and sign languages share widespread conceptual metaphors such as LIFE IS A JOURNEY and THE MIND IS A CONTAINER and orientational metaphors like GOOD
IS UP and BAD IS DOWN (see, for example, Wilcox (2000); for further discussion, see chapter 18). These metaphors are exploited in sign language poetry through the use of symbolism in the formational parameters of signs. For example, signs in poems that move repeatedly upward may be interpreted as carrying positive meaning, while downward signs carry negative connotations. Thus, as Taub (2001) has described, Ella Mae Lentz’s ASL poem The Treasure uses images of burying and uncovering treasure to describe appreciation of ASL ⫺ signs moving downward show negative views toward the language and signs that move up show positive views. Thanks, a poem in Italian Sign Language (LIS) by Giuranna and Giuranna (2000), contrasts downward-moving signs when describing perceived shortcomings of the language with upward-moving signs used to insist on the fine qualities of LIS. Kaneko (2011), exploring signed haiku, found a strong correlation between handshape and meaning in many of the poems she considered. Signs using an open handshape correlate with positive semantic meaning, and those with a tense ‘clawed’ handshape tend to carry negative semantic meaning. This correlation is seen generally in the BSL lexicon (and I would expect, from informal observation and remarks in publications on other sign languages, that BSL is not unique in this respect). Using the Dictionary of BSL/English (1992), Kaneko calculated that of all the 2,124 signs listed, 7 % had a positive valence and 15 % had a negative valence (the remaining signs carried neutral meaning). The distribution of these semantic attributes, however, differed for signs with different handshapes.
Rachel Sutton-Spence, Bristol (United Kingdom)
IX. Handling sign language data

42. Data collection

1. Introduction
2. Introspection
3. Data elicitation
4. Sign language corpus projects
5. Informant selection
6. Video-recording data
7. Conclusion
8. Literature
Abstract

This chapter deals with data collection within the field of sign language research and focuses on the collection of sign language data for the purpose of linguistic ⫺ mainly grammatical ⫺ description. Various data collection techniques using both introspection and different types of elicitation materials are presented, and it is shown how the selection of data can have an impact on the research results. As the use of corpora is an important recent development within the field of (sign) linguistics, a separate section is devoted to sign language corpora. Furthermore, two practical issues that are more or less modality-specific are discussed, i.e. the problem of informant selection and the more technical aspects of video-recording the data. It is concluded that, in general, publications should contain sufficient information on data collection and informants in order to help the reader evaluate research findings, discussions, and conclusions.
1. Introduction

Sign language linguistics is a broad research field including several sub-disciplines, such as (both diachronic and synchronic) phonetics/phonology, morphology, syntax, semantics and pragmatics, sociolinguistics, lexicography, typology, and psycho- and neurolinguistics. Each sub-domain in turn comprises a wide range of research topics. For example, within sociolinguistics one can discern the linguistic study of language attitudes, bi- and multilingualism, standardisation, language variation and language change, etc. Furthermore, each sub-domain and research question may require specific types of data. Phonological and lexicographical research can focus on individual lexemes, but morphosyntactic research requires a different approach, using more extensive corpora, certain language production elicitation methods, and/or introspection. For discourse-related research into turn-taking, on the other hand, a researcher would need to videotape dialogues or multi-party meetings. Even within one discipline, it is necessary to first decide on the research questions and then on which methodologies can be used
to find answers. Since it is not possible to deal with all aspects of linguistic research in this chapter, we have decided to focus on data collection for the purpose of linguistic description. In general, (sign language) linguists claim to be using either qualitative or quantitative methodologies and regard these methodologies as two totally different (often incompatible) approaches. However, we prefer to talk about a continuum of research methodologies from qualitative to quantitative approaches rather than a dichotomy. At one end of the continuum lies introspection, the ultimate qualitative methodology (see section 2); at the other end lies experimentation, the typically quantitative one. To the best of our knowledge, the latter methodology has not been used in studies of the linguistic description of sign languages. In between, there are mainly methods of observation and focused description on the basis of systematic elicitation (see section 3). Next, and newer to the field of sign language research, there are corpus-based studies, in which a (relatively) large corpus is mined for examples of structures and co-occurrences of items that then constitute the data for analysis (see section 4). When designing a study, it is also very important to think about the selection of informants (see section 5) and to take into account the more technical aspects of data collection (see section 6).
2. Introspection

2.1. Value and limitations

According to Larsen-Freeman and Long (1991, 15), “[p]erhaps the ultimate qualitative study is an introspective one” in which subjects (often the researchers themselves) examine their own linguistic behaviour. In linguistics (including sign language linguistics), this methodology has frequently been used for investigating grammaticality judgments by tapping the intuitions of the “ideal native speaker” (Chomsky 1965). Schütze (1996, 2) gives some reasons why such an approach can be useful:

⫺ Certain rare constructions are sometimes very hard to elicit and hardly ever occur in a corpus of texts. In this case, it is easier to present a native speaker with the construction studied and ask him/her about grammaticality and/or acceptability.
⫺ A corpus of texts or elicited data cannot give negative information, that is, those data cannot tell the researcher that a certain construction is ungrammatical and/or unacceptable.
⫺ Through tapping a native speaker’s intuitions, performance problems in spontaneous speech, such as slips of the tongue or incomplete utterances, can be weeded out.

At the same time, Schütze (1996, 3⫺6) acknowledges that introspection as a methodology has also attracted a great deal of criticism:

⫺ Since the elicitation situation is artificial, an informant’s behaviour can be entirely different from what s/he would normally do in everyday conversation.
⫺ Linguistic elicitation as it has been done in the past few decades does not follow the procedures of psychological experimentation, since the data gathering has been too informal. Sometimes researchers only use their own intuitions as data, but in Labov’s terms: “Linguists cannot continue to produce theory and data at the same time” (1972, 199). Moreover, “[b]eing a native speaker doesn’t confer papal infallibility on one’s intuitive judgments” (Raven McDavid, quoted in Paikeday 1985).
⫺ Basically, grammaticality judgments are another type of performance. Although they supposedly tap linguistic competence, linguistic intuitions “are derived and rather artificial psycholinguistic phenomena which develop late in language acquisition […] and are very dependent on explicit teaching and instruction” (Levelt et al. 1977, in Schütze 1996).

The last remark in particular is highly relevant for sign language linguistics, since many, if not most, native signers will not have received any explicit teaching and instruction in their own sign language when they were at school. This fact, in combination with the scarcity or complete lack of codification of many sign languages and the atypical acquisition process of sign languages in many communities (which results in a wide variety of competencies in these communities), raises the question to what extent it is possible to tap the linguistic intuitions of native signers in depth (see also section 5 on informant selection). Schütze himself proposes a modified approach in order to answer the above criticism. The central idea of his proposal is that one should investigate not only a single native speaker’s linguistic intuitions, but rather those of a group of native speakers:

I argue […] that there is much to be gained by applying the experimental methodology of social science to the gathering of grammaticality judgments, and that in the absence of such practices our data might well be suspect. Eliminating or controlling for confounding factors requires us to have some idea of what those factors might be, and such an understanding can only be gained by systematic study of the judgment process. Finally, I argue that by studying interspeaker variation rather than ignoring it (by treating only the majority dialect or one’s own idiolect), one uncovers interesting facts. (Schütze 1996, 9)
Clearly, caution remains necessary. Pateman (1987, 100), for instance, argues that “it is clear and admitted that intuitions of grammaticality are liable to all kinds of interference ‘on the way up’ to the level at which they are given as responses to questions. In particular, they are liable to interference from social judgments of linguistic acceptability”.
2.2. Techniques

Various techniques have been used to tap an informant’s intuitions about the linguistic issue under scrutiny. Some of them will be discussed in what follows, but this is certainly not an exhaustive list.

(i) Error recognition and correction
Error recognition is a fairly common task but has not been used all that frequently in sign language research. Here informants are presented with a number of
utterances and are asked to detect possible errors and to correct them if there are any. However, since many sign languages have not yet (or have hardly) been codified, and since many sign language users have not been educated in their sign language, this may prove to be a difficult task for certain informants (see above). Therefore, caution is warranted here.

(ii) Grammaticality judgments
In this type of task, informants are presented with a number of utterances and are asked whether they would consider them grammatically correct and/or appropriate or not. If a negative reply is given, informants can be asked to correct the utterance as well. An example of this in sign language research is a task in which a participant is presented with a number of classifier handshapes embedded in the same classifier construction and is asked whether each particular classifier handshape is appropriate/acceptable in the context provided. An extension of this task would be to vary certain aspects of the execution of a sign (for instance, the handshape, the location, the rate, the manner, the non-manual aspects, etc.) and to ask the informants what the consequences of the change(s) actually are (morphologically, semantically, etc.) rather than just asking them whether the modified production would still be grammatically correct and/or acceptable.

(iii) Semantic judgments
Informants can be asked what the exact meaning of a certain lexeme is and in which contexts it would typically occur or in which contexts and situations it would be considered appropriate. In sign language research, informants can also be asked whether a certain manual production would be considered a lexical, conventional sign, or whether it is rather a polycomponential construction.

(iv) Other judgment tasks
Informants can also be asked to evaluate whether certain utterances, lexemes, etc. are appropriate for a given discourse situation (for instance, with respect to politeness, style, genre, and/or register). Furthermore, they could be asked to introspect on the speech act force of a certain utterance. To our knowledge, this type of task has not been used all that frequently in sign language research. What has been done quite frequently in sign language research, though, is checking back with informants by asking them to introspect on their own productions of certain elicited data and/or by asking (a group of) native signers to introspect on certain elicited data (and/or the researchers’ analyses) (see also section 3.3).
3. Data elicitation

In the first part of this section, some examples of tasks which can be used for data elicitation will briefly be explained. A number of these have been used quite extensively by various researchers investigating different sign languages, while others have been used far less frequently (cf. Hong et al. 2009). The discussion proceeds from tasks with less control exerted by the researcher to tasks with more control. The second part discusses certain decisions with respect to data collection and the impact these decisions can have on the results obtained. Finally, an integrated approach, in which various methodologies are used in sequence, is presented.
3.1. Elicitation techniques and materials

3.1.1. Recording natural language use in its context

On the “tasks with less ⫺ or no ⫺ control” end of the continuum, one finds the recording of traditional narratives in their appropriate context. One could, for example, videotape the after-dinner speech of the Deaf club’s president at the New Year’s Eve party as an example of a quasi-literary oratorical style involving the appropriate adjustments for a large room and the formality of the occasion. A priest’s sermon would be a similar example. In studies of language acquisition, videotaping bathtime play is a good way to collect data from parent-child interaction (Slobin/Hoiting/Frishberg, personal communication). In this context, it is very important for the researcher to be aware of the “Observer’s Paradox”, first mentioned by Labov in the late 1960s (e.g. Labov 1969, 1972). Labov argues that even if the observer is very careful not to influence the linguistic activity, the mere presence of an observer will have an impact on the participants, who are likely to produce utterances in a manner different from when the observer is not present.
3.1.2. Free and guided composition

In free composition, the researcher merely provides the informant with a topic and asks him/her to talk about it. Again, there is little control, although it is possible to use this task to elicit particular structures. An obvious example is to ask an informant about his/her past experiences in order to get past time references in the data. An example of guided composition that has been used in sign language research is asking informants to draw their own family tree and to talk about family relations in order to elicit kinship terms.
3.1.3. Role play and simulation games

Role play and simulation games are tasks which can also easily be used to elicit particular grammatical structures. If one informant is told to assume the role of interviewer and another is the interviewee (a famous athlete, for instance), the elicited data are expected to contain many questions. Creative researchers can certainly invent other types of role play yielding different grammatical structures. Simulation games are usually played on a larger scale (with more participants) but are less well-defined, in that the players only get a prompt and have to improvise as the conversation progresses (for example, they have to simulate a meeting of the board of a Deaf club or a family birthday party). As such, the researcher does not have a lot of control, but this procedure can nevertheless yield focused data (to look at turn-taking or register variation, for instance).
3.1.4. Communication games

Communication games have also been used in sign language research to elicit production data. An example is a game played by two people who are asked to look at
drawings which contain a number of (sometimes subtle) differences. The players cannot see each other’s drawings and have to try to detect what exactly those differences are by asking questions. Other possibilities include popular guessing games like I spy … or a game played among a group of people in which one participant thinks of a famous person and the others have to guess the identity of this person by asking yes/no questions (and other variants of this game).
3.1.5. Story retelling

In sign language research, story retelling is commonly used for data elicitation. Here we present four forms of story retelling: (i) picture story retelling, (ii) film story retelling, (iii) the retelling of written stories, and (iv) the retelling of signed stories.

(i) Picture story retelling
In some picture story elicitation tasks, informants are presented with a picture story made up of drawings and are asked to describe the depicted events. Normally such picture stories do not contain any type of linguistic information, that is, there is no written language accompanying the pictures. The following stories have been quite widely used in sign language research:

The Horse Story
The Horse Story (Hickmann 2003) was originally used in spoken language acquisition research but has also been used to elicit sign language data from adult signers, especially but not exclusively with a view to crosslinguistic comparison, as well as in research on ‘homesign’ (see chapter 26). It is a rather short picture story made up of five drawings about a horse that wants to jump over a fence in order to be with a cow in an adjacent meadow. However, the horse hits the fence, hurts its leg, and falls down. A little bird has witnessed the scene and flies off to get a first-aid kit. This is then used by the cow to bandage up the horse’s leg.

The Snowman
A longer picture story with a longer history of being used for the elicitation of sign language data is The Snowman, a children’s book by Raymond Briggs, first published in 1978 and turned into an animated film in 1982. The story is about a boy who makes a snowman that comes to life the following night. A large part of the story deals with the boy showing the snowman appliances, toys, and other bric-a-brac in the boy’s house, while they are trying to keep very quiet so as not to wake up the boy’s parents. Then the boy and the snowman set out on a flight over the boy’s town, over houses and large buildings, before arriving at the sea. While they are looking at the sea, the sun starts to rise and they have to return home. The next morning, the boy wakes up to find the snowman melted. This is the story as it appears in the book; the film has additional parts, including a snowmen’s party and a meeting with Father Christmas and his reindeer.
Frog, Where Are You?
Another wordless picture story often used to elicit sign (and spoken) language narratives is Frog, Where Are You? by Mercer Mayer, published in 1969. This story is about a boy who keeps a frog captive in a jar. One night, however, the frog escapes, and the boy, accompanied by his dog, goes looking for it in the woods. Before finding the frog, they go through various adventures.

(ii) Film story retelling
Next to narratives elicited by means of drawings, there is also film story retelling: informants are shown animated cartoons or (part of) a film and are asked to retell what they have just seen. Usually, cartoons or films used to elicit sign language narratives contain little or no spoken or written language. Examples include clips taken from The Simpsons, Wallace and Gromit, The Pink Panther, and Tweety Bird & Sylvester cartoons as well as short episodes from Die Sendung mit der Maus, a German children’s television series featuring a large personified mouse and a smaller personified elephant as the main protagonists. All of these animated cartoons were produced to be shown on television. There are also films that were made specifically for use in linguistic research. A well-known example is The Pear Story, a six-minute film developed by Wallace Chafe and his team in the mid-1970s to elicit narratives from speakers around the world (Chafe 1980). The film shows a man harvesting pears, which are stolen by a boy on a bike. The boy has some other adventures with other children before the farmer discovers that his pears are missing. The film includes sound effects but no words. The Pear Story has also been used in sign language research.

(iii) Retelling of written stories
There are some examples of signed narratives elicited by means of written stories. In the context of Case Study 4: Sign Languages of the ECHO (European Cultural Heritage Online) project, for example, stories from Aesop’s Fables in written English, Swedish, and Dutch were used to elicit narratives in British Sign Language (BSL), Swedish Sign Language (SSL), and Sign Language of the Netherlands (NGT). Working with this type of translated text can have two major drawbacks. First, the (morpho)syntax of the target language may be influenced by the source language, and second, one needs to make sure that informants have a (near-)native proficiency in both languages. At the same time, however, working with parallel corpora of translated texts can be interesting for other purposes, e.g. for translation studies.

(iv) Retelling of signed stories
Some of the NGT fables mentioned above were used as elicitation materials during more recent NGT data collection sessions: signers were shown the signed fables and asked to retell them (Crasborn/Zwitserlood/Ros 2008). These signed stories can then again be used for analysis towards linguistic description.
3.1.6. Video clip description

In the 1970s, Supalla created elicitation materials designed to elicit polycomponential verbs of motion and location (Supalla 1982). The original materials, known as the Verbs
of Motion Production Test (VMP), include some 120 very short video clips showing objects moving in specific ways. Informants (American deaf children in the original Supalla (1982) study) are asked to watch the animated scenes and to describe the movement of the object shown in the clip. The VMP task can easily be used to study verbs of motion and location in other sign languages and/or produced by other groups of informants, although Schembri (2001, 156) notes that the task may be of less use with signers from non-Western cultures because the objects include items that may not be familiar to members of these cultures. There is a shorter version of the VMP task which consists of 80 coded items and five practice items. This version is included as one of twelve tasks in the Test Battery for American Sign Language Morphology and Syntax (Supalla et al., no date). Both the longer and the shorter version of the VMP task have been used in a number of studies on different sign languages and are still used today, for example, in the context of some of the corpus projects discussed in section 4. A somewhat comparable set of stimuli is the ECOM clips from the Max Planck Institute for Psycholinguistics (Nijmegen): 74 animations showing geometrical entities that move and interact. These have also been used in sign language research, mainly to study classifier constructions. A set of stimuli consisting of 66 videotaped skits of approximately 3⫺10 seconds, depicting real-life actors performing and undergoing certain actions, was used to study the indexicality of singular versus plural verbs in American Sign Language (Cormier 2002).
3.1.7. Picture description

Picture description may take the form of a question-and-answer session. Participants are asked to look at a picture or a series of pictures (or drawings) and then answer questions designed to elicit particular structures under study. This is a fairly common procedure in lexicographical research, but it has also been used to target certain grammatical patterns. In such a question-and-answer session, there is linguistic interaction between the informant and at least one other person. In another task involving picture description, the signer describes a specific picture to an interlocutor who subsequently has to select the correct picture (i.e. the picture described by the signer) from a series of (almost identical) pictures or drawings. This elicitation procedure is often used to elicit specific forms or structures, e.g. plural forms, locative constructions, or classifier constructions. A well-known example in sign language research is the by now classic study on word order in Italian Sign Language, for which Volterra et al. (1984) designed elicitation materials. Since then, these materials have been used for the analysis of constituent order in declarative sentences in a number of other sign languages (Johnston et al. 2007). In the Volterra et al. task, eighteen pairs of drawings with only one contrastive element (e.g. ‘A cat is under a chair’ versus ‘A cat is on a chair’) are used to elicit sentences describing three distinct states of affairs: six locative states of affairs (e.g. ‘The tree is behind/in front of the house’), six non-reversible states of affairs (e.g. ‘The boy/girl eats a piece of cake’), and six reversible states of affairs (e.g. ‘The car is towing the truck/The truck is towing the car’). The person videotaped is a signer who has the drawings before him/her, and for each pair, one of the drawings is marked with an arrow. The interlocutor, another signer who is not being videotaped, has the same
drawings, but without arrows. The first signer is asked to sign one sentence describing the drawing marked with the arrow; the interlocutor is asked to indicate which of the two drawings of each pair is being described. The main purpose of studies using this elicitation task has been to analyse whether the sign language under investigation exhibits systematic ordering of constituents in declarative utterances that contain two arguments, and if this is the case, to determine the patterns that occur. A variant of the Volterra et al. task makes use of elicitation materials that consist of sets of pictures, e.g. four pictures, with only one deviant picture. The signer is asked to describe the picture that is different. This task may, for example, be used to elicit negative constructions, when the relevant picture differs from the others in that there is something missing.
3.1.8. Elicited translation

In elicited translation, the researcher provides the informant with an isolated utterance in one language (usually the surrounding spoken language, but it could also be another sign language) and asks the informant to translate the utterance into his/her own (sign) language. This procedure was widely used in sign language research, especially in its early days, but has more recently been regarded with suspicion because of the risk of interference from the source language on the target language. Consequently, (mostly morphosyntactic) linguistic descriptions of target sign language structures elicited by means of this method may be less valid. A slightly less controlled form of elicited translation consists in presenting informants with verbs in a written language and asking them to produce a complete signed utterance containing the same verb. In order to further minimize possible interference from the written language, these utterances can subsequently be shown to another informant who is asked to copy the utterance. It is this final utterance which is then used for the analysis. This would be an example of elicited imitation (see next sub-section).
3.1.9. Elicited imitation

In elicited imitation, the researcher produces an utterance containing a certain linguistic structure and asks the informant to repeat what s/he has just produced. If the utterance is long enough, the informant will not be able to rely on working memory but will have to rely on semantic and syntactic knowledge of the language. To our knowledge, this procedure has not been used in sign language research yet, but it could yield interesting results when executed correctly. One could imagine that this procedure might be used to study non-manuals, for instance.
3.1.10. Completion task

In a manner fairly similar to the previous one, informants are asked to complete an utterance started by the researcher. This type of task can be used to study plural
formation, for instance. The researcher signs something like “I have one daughter, but John has …” (three daughters). As far as we know, this technique has only rarely been used in sign language research.
3.1.11. Structured exercises

In structured exercises, informants are asked to produce certain sentence structures in a predetermined manner. Informants can be presented with two clauses, for instance, and asked to turn them into one complex sentence (e.g. by embedding one of the clauses as a relative clause into the other), or they can be asked to turn a positive utterance into a negative one. Again, this technique has been used in sign language research, but certainly not on a large scale.
3.2. Data selection and impact on results

The selection of data can, of course, have a major impact on research results. When examining the degree of similarity across the grammars of different sign languages, for instance, looking at elicited utterances produced in isolation may lead to a conclusion which is very different from the overall picture one would get when comparing narratives resulting from picture story descriptions. The latter type of data contains many instances where the signer decides to “tell by showing” (“dire en montrant”; Cuxac 2000), and it seems likely that the resulting prominence of visual imagery in the narratives ⫺ among other issues ⫺ yields more similarity across sign languages (see, for instance, Vermeerbergen (2006) for a more comprehensive account). In general, the strategy of ‘telling by showing’ is (far) less present in isolated declarative sentences, and it is in these constructions that we find more differences between sign languages (Van Herreweghe/Vermeerbergen 2008). The nature of the data one works with might also influence one’s opinion when it comes to deciding how to approach the analysis of a sign language. Does one opt for a more ‘oral language compatible view’ or rather decide on a ‘sign language differential view’?

On the one hand, there is the oral language compatibility view. This presupposes that most of SL structure is in principle compatible with ordinary linguistic concepts. On the other hand, there is the SL differential view. This is based on the hypothesis that SL is so unique in structure that its description should not be primarily modelled on oral language analogies. (Karlsson 1984, 149 f.)
Simplifying somewhat, it could be argued that the question whether ‘spoken language tools’, that is, theories, categories, terminology, etc. developed and used in spoken language research, are appropriate and/or sufficient for the analysis and description of sign languages will receive different answers depending on whether one analyzes the signed production of a deaf comedian or a corpus consisting of single sentences translated from a spoken language into a sign language. A similar observation can be made with regard to the relationship between the choice of data and the issue of sign
languages as homogeneous systems or as basically heterogeneous systems in which meanings are conveyed using a combination of elements, including linguistic elements but also components traditionally regarded as not being linguistic in nature.
3.3. An integrated approach

When it comes to studying a certain aspect of the linguistic structure of a sign language, we would like to maintain that there is much to be gained from approaching the study by using a combination of the above-mentioned methodologies and techniques. An example of such an integrated line of research for the study of negatives and interrogatives in a particular sign language might include the following steps:

Step 1: Making an inventory of the instances of negatives and interrogatives in a previously collected and transcribed corpus of monologues and dialogues in the sign language studied.
Step 2: Eliciting more focused production data in which negatives and interrogatives can be expected, such as a role play between signers in which one informant takes the role of an interviewer (asking questions) and the other of the interviewee (giving negative replies), or by means of communication games.
Step 3: Transcribing the data collected in step 2 and making an inventory of the negatives and interrogatives, followed by an analysis of these occurrences.
Step 4: Checking the analysis against the intuitions of a group of (near-)native signers by means of introspection.
Step 5: Designing a more controlled judgment study in which one group is confronted with (what the researchers think are) correct negative and interrogative constructions and another with (what the researchers think are) incorrect negatives and interrogatives.
Step 6: Proposing a description of the characteristic properties of negatives and interrogatives in the sign language under scrutiny.
4. Sign language corpus projects

4.1. Why corpus linguistics?

Corpus linguistics is a fairly new branch of linguistic research which has developed hand in hand with the possibilities offered by increasingly advanced computer technology. In the past, any set of data on which a linguistic analysis was performed was called a ‘corpus’. However, with the advent of computer technology and corpus-based linguistics, use of the term ‘corpus’ has become more and more restricted to collections of texts in a machine-readable form. Johnston (2009, 18) argues: “Corpus linguistics is based on the assumption that processing large amounts of annotated texts can reveal patterns of language use and structure not available to lay user intuitions or even to
expert detailed linguistic analyses of particular texts.” In corpus linguistics, “quantitative analysis goes hand in hand with qualitative analysis” (Leech 2000, 49) since

[e]mpirical linguists are interested in the actual phenomena of language, in the recordings of spoken and written texts. They apply a bottom-up procedure: from the analysis of individual citations, they infer generalizations that lead them to the formulation of abstractions. The categories they design help them understand differences: different text types, syntactic oppositions, variations of style, shades of meaning, etc. Their goal is to collect and shape the linguistic knowledge needed to make a text understandable. (Mahlberg 1996, iv)
The same obviously holds for sign language corpora. However, since they contain face-to-face interaction, they are more comparable to spoken language corpora than to written language corpora, and according to Leech (2000, 57),

[t]here are two different ways of designing a spoken corpus in order to achieve ‘representativeness’. One is to select recordings of speech to represent the various activity types, contexts, and genres into which spoken discourse can be classified. This may be called genre-based sampling. A second method is to sample across the population of the speech community one wishes to represent, in terms of sampling across variables such as region, gender, age, and socio-economic group, so as to represent a balanced cross-section of the population of the relevant speech community. This may be called a demographic sampling.
In sign language corpora, it is especially the latter type of sampling that has been done so far. Moreover, sign language corpora are similar to spoken language corpora (and not so much to written language corpora) since they are only machine-readable when transcriptions and annotations are included (for the transcription of sign language data, we refer the reader to chapter 43).
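To make the notion of a machine-readable corpus concrete, the following is a minimal sketch of the kind of query such a corpus affords: tallying gloss frequencies across annotation files. It assumes ELAN-style .eaf files (an XML format widely used for multimedia annotation); the tier name "GLOSS" and the directory layout are hypothetical.

```python
# Minimal sketch: tally sign glosses across ELAN-style (.eaf) annotation
# files. The tier name "GLOSS" and the corpus directory are assumptions.
from collections import Counter
from pathlib import Path
import xml.etree.ElementTree as ET

def gloss_frequencies(corpus_dir: str, tier_id: str = "GLOSS") -> Counter:
    counts: Counter = Counter()
    for eaf in Path(corpus_dir).glob("*.eaf"):
        root = ET.parse(eaf).getroot()
        for tier in root.iter("TIER"):
            if tier.get("TIER_ID") != tier_id:
                continue
            # Both time-aligned and reference annotations wrap their text
            # in an ANNOTATION_VALUE element.
            for value in tier.iter("ANNOTATION_VALUE"):
                if value.text:
                    counts[value.text.strip()] += 1
    return counts

if __name__ == "__main__":
    for gloss, n in gloss_frequencies("corpus/").most_common(20):
        print(f"{n:6d}  {gloss}")
```

A tally of this kind is precisely the sort of quantitative result that, as Johnston notes, is not available to intuition or to the detailed analysis of individual texts.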
4.2. Sign language corpora

In sign language linguistics, corpus linguistics (at least in its more restricted sense of machine-readable corpora) is still in its infancy, although it is growing rapidly. Johnston (2008, 82) expresses the need for sign language corpora as follows:

Signed language corpora will vastly improve peer review of descriptions of signed languages and make possible, for the first time, a corpus-based approach to signed language analysis. Corpora are important for the testing of language hypotheses in all language research at all levels, from phonology through to discourse […]. This is especially true of deaf signing communities which are also inevitably young minority language communities. Although introspection and observation can help develop hypotheses regarding language use and structure, because signed languages lack written forms and well developed community-wide standards, and have interrupted transmission and few native speakers, intuitions and researcher observations may fail in the absence of clear native signer consensus of phonological or grammatical typicality, markedness or acceptability. The past reliance on the intuitions of very few informants and isolated textual examples (which have remained essentially inaccessible to peer review) has been problematic in the field. Research into signed languages has grown dramatically over the past three to four decades but progress in the field has been hindered by the resulting obstacles to data sharing and processing.
One of the first (if not the first) large-scale sign language corpus projects is the corpus of American Sign Language (ASL) collected by Ceil Lucas, Robert Bayley, and their team (see, for instance, Lucas/Bayley/Valli 2001). In the course of 1995, they collected data in seven cities in the United States that were considered to be representative of the major areas of the country: Staunton, Virginia; Frederick, Maryland; Boston, Massachusetts; Olathe, Kansas/Kansas City, Missouri; New Orleans, Louisiana; Fremont, California; and Bellingham, Washington. All of these cities have thriving communities of ASL users, and some also have residential schools for deaf children and, as such, long-established Deaf communities. A total of 207 African-American and white working- and middle-class men and women participated in the project. They could be divided into three age groups: 15⫺25, 26⫺54, and 55 and up. All had either acquired ASL natively at home or had learned to sign in residential schools before the age of 5 or 6 (see Lucas/Bayley/Valli 2001). For each site, at least one contact person was asked to identify fluent, lifelong ASL users who had to have lived in the community for at least ten years. The contact persons, deaf themselves and living in the neighborhood, assembled groups of two to seven signers. At the sites where both white and African-American signers were interviewed, two contact persons were appointed, one for each community. All the data were collected in videotaped sessions that consisted of three parts. In the first part of each session, approximately one hour of free conversation among the members of each group was videotaped, without any of the researchers being present. In the second part, two participants were selected and interviewed in depth by the deaf researchers. The interviews included topics such as background, social network, and patterns of language use. Finally, 34 pictures were shown to the signers to elicit signs for the objects or actions represented in the pictures. It was considered to be very important not to have any hearing researcher present in any of the sessions: “It has been demonstrated that ASL signers tend to be very sensitive to the audiological and ethnic status of an interviewer […]. This sensitivity may be manifested by rapid switching from ASL to Signed English or contact signing in the presence of a hearing person” (Lucas/Bayley 2005, 48). Moreover, the African-American participants were interviewed by a deaf African-American research assistant, and during the group sessions with African-American participants, no white researchers were present. In total, data from 62 groups were collected at community centers, at schools for deaf children, in private homes, and at a public park. At the same time, a cataloguing system and a computer database were developed to collect and store metadata as well, that is, details as to when and where each group was interviewed and personal information (name, age, educational background, occupation, pattern of language use, etc.). Furthermore, the database also contained details about phonological, lexical, morphological, and syntactic variation, and further observations about other linguistic features of ASL that are not necessarily related to variation. The analysis of this corpus has led to numerous publications about sociolinguistic variation in ASL (see chapter 33 on sociolinguistic variation).
Since this substantial ASL corpus project, for which the data were collected in 1995, sign language corpus projects have been initiated in other countries as well, including Australia, Ireland, The Netherlands, the United Kingdom, Germany, China (Hong Kong), Italy, Sweden, and France, and more are planned in other places. Some of these corpus projects also focus on sociolinguistic variation, but most have multiple goals, and the data to be obtained can not only be used for linguistic description,
but also for the preservation of older sign language data for future research (i.e. the documentation of diachronic change) or as authentic materials to be used in sign language teaching. The reader can find up-to-date information with respect to these (and new) corpus projects at the following website: http://www.signlanguagecorpora.org.
4.3. Metadata

When collecting a corpus it is of the utmost importance to also collect and store metadata related to the linguistic data gathered. In many recent sign language corpus projects, the IMDI metadata database is being used, an existing database which has been further developed in the context of the ECHO project at the Max Planck Institute for Psycholinguistics in Nijmegen (The Netherlands) (Crasborn/Hanke 2003; also see www.mpi.nl/IMDI/). This approach is being increasingly used in smaller research projects as well. A good example is presented in Costello, Fernández, and Landa (2008, 84⫺85):

We video-record our informants in various situations and contexts, such as spontaneous conversations, controlled interviews and elicitation from stimulus material. Each recording session is logged in the IMDI database to ensure that all the related metadata are recorded. The metadata relate to the informant, for example:
⫺ age, place of birth and sex
⫺ hearing status, parents’ hearing status, type of hearing aid used (if any)
⫺ age of exposure to sign language
⫺ place and context of sign language exposure
⫺ primary language of communication within the family
⫺ schooling (age, educational program, type of school)

and also to the specific context of the recording session, such as:
⫺ type of communicative act (dialogue, storytelling, question and answer)
⫺ degree of formality
⫺ place and social context
⫺ topic of the content.
Another important piece of information to include in the metadata is birth order of the informant and hearing status of siblings, if any. There are, for instance, clear differences between the youngest/oldest deaf person in a family with hearing parents and three older/younger deaf siblings and the youngest/oldest deaf person in a family with hearing parents and three older/younger hearing siblings.
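In practice, metadata of this kind lend themselves to a simple structured representation. The following is a minimal sketch of such a record, modelled on the fields listed above (including birth order and sibling hearing status); the field names and types are illustrative simplifications, not the actual IMDI schema.

```python
# Minimal sketch of informant and session metadata, modelled on the
# IMDI-style fields listed above. Field names are illustrative, not the
# actual IMDI schema.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Informant:
    informant_id: str
    age: int
    place_of_birth: str
    sex: str
    hearing_status: str                 # e.g. "deaf", "hard of hearing"
    parents_hearing_status: str
    hearing_aid: Optional[str]          # None if no aid is used
    age_of_sign_exposure: int
    exposure_context: str               # e.g. "deaf school", "home"
    family_language: str
    schooling: str
    birth_order: Optional[int] = None   # see the note above
    deaf_siblings: Optional[int] = None

@dataclass
class RecordingSession:
    session_id: str
    communicative_act: str              # "dialogue", "storytelling", ...
    formality: str                      # e.g. "informal", "formal"
    place: str
    social_context: str
    topic: str
    informants: list[Informant] = field(default_factory=list)
```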
5. Informant selection

Not all users of a specific language show the same level of language competence. This is probably true of all language communities and of all languages, but it is even more true of sign language communities. This is, of course, related to the fact that across the world, 90 to 95 percent (or more, cf. Johnston 2004) of deaf children are born to hearing parents, who are very unlikely to know the local sign language. Most often
deaf children only start acquiring a sign language when they start going to a deaf school. This may be early in life, but it may also be (very) late or even never, either because the deaf child’s parents opt for a strictly oral education with no contact with a sign language or because the child does not go to school at all. Consequently, only a small minority of signers can be labelled “mother tongue speaker” in the strict sense of the word, and in most cases, these native signers’ signing parents will not be/have been native signers themselves. When deaf parents are late learners of a sign language, for instance, when they did not learn to sign until they were in their teens, this may be reflected in their sign language skills, which may in turn have an effect on their children’s sign language production. In spoken language research, especially in the case of research on the linguistic structure of a given language, the object of study is considered to be present in its most natural state in the language production of a native speaker (but see section 2 above). When studying form and function of a specific grammatical mechanism or structure in a spoken language, it would indeed be very unusual to analyse the language production of non-native speakers and/or to ask non-native speakers to provide grammaticality judgments. The importance of native data has also been maintained for sign language research, but, as stated by Costello, Fernández, and Landa (2008, 78), “there is no single agreed-upon definition of native signer, and frequently no explanation at all is given when the term is used”. The “safest option model of native signers” (Costello/Fernández/Landa 2008, 79) is the informant who is (at least) a second-generation deaf-of-deaf signer. However, in small Deaf communities, such ideal informants may be very few in number. For example, Costello et al. themselves claim that they have not managed to find even seven second-generation signers in the sign language community of the Basque Country, a community estimated to include around 5,100 people. Johnston (2004, 370 f.) mentions attempts to locate deaf children of deaf parents under the age of nine and claims that it was not possible to locate more than 50 across Australia. Especially in small communities where there is merely a handful of (possibly) native signers, researchers may be forced to go for the second best and decide to stipulate a number of criteria which informants who are not native signers must meet. Such criteria often include:
⫺ early onset of sign language acquisition; often the age of three is mentioned here, but sometimes also six or seven;
⫺ education in a school for the deaf, sometimes stipulating that this should be a residential school;
⫺ daily use of the sign language under investigation (e.g. with a deaf signing partner and/or in a deaf working environment);
⫺ prolonged membership of the Deaf community.
Note that it may actually be advisable to apply these criteria to native signers as well. At the same time, we would like to make two final comments:
(1) In any community of sign language users, small or large, there are many more non-native signers than native signers. This means that native signers most often have non-native signers as their communication partners and this may affect their intuitions about language use. It may well be that a certain structure is over-used by
non-native signers so that that structure is seen as “typical” of or “normal” for the language, although it is not very prominent in the language production of native signers. One can even imagine that a structure (e.g. a certain constituent order) which results from the influence of the spoken majority language and is frequently used by non-native signers is characterized as “acceptable” by native signers even though the latter would not use this structure themselves, at least not when signing to another native language user.
(2) If one wants to get an insight into the mechanisms of specific language practices within a certain sign language community (e.g. to train the receptive language skills of sign language interpreter students), it might be desirable in certain sign language communities not to restrict the linguistic analysis to the language use of third-generation native signers. Because non-native signers make up the vast majority of the language community, native signers are not necessarily “typical” representatives of that community.
Natural languages are known to show (sociolinguistic) variation. It seems that for sign languages, region and age are among the most important determining factors, although we feel it is safe to say that in most, if not all, sign languages the extent and nature of variation is not yet fully understood. Thus, variation is another issue that needs to be taken into account when selecting informants. Concerning regional variation in the lexicon of Flemish Sign Language (VGT), for example, research has shown that there are five variants, with the three most centrally located areas having more signs in common, compared to the two more peripheral provinces. Also, there seems to be an ongoing spontaneous standardization process with the most central regions “leading the dance” (Van Herreweghe/Vermeerbergen 2009). Therefore, in order to study a specific linguistic structure or mechanism in VGT, it is best to include data from all different regions. Whenever that is not possible, it is important to be very specific about the regional background of the informants because it may well be the case that the results of the analysis are valid for one region but not for another.
Finally, we would like to stress the necessity of taking into account the anthropological and socio-cultural dimensions of the community the informants belong to. When working with deaf informants, researchers need to be sensitive to the specific values and traditions of Deaf culture, which may at times be different from those of the surrounding culture. Furthermore, when the informants belong to a Deaf community set within a mainstream community that the researcher is not a member of, this may raise other issues that need to be taken into consideration (e.g. when selecting elicitation materials). A discussion of these and related complications, however, is beyond the scope of this chapter.
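Returning to the selection criteria listed earlier in this section, a researcher might operationalize them as a simple screening filter over candidate records. The sketch below is purely illustrative: the record format is hypothetical, and the thresholds (onset by age three, ten years of community membership) are one possible reading of criteria that the literature states only loosely.

```python
# Minimal sketch: screening candidate informants against the selection
# criteria listed above. Record format and thresholds are illustrative.

def meets_criteria(candidate: dict) -> bool:
    return (
        candidate["age_of_sign_exposure"] <= 3          # early onset
        and candidate["deaf_school"]                     # deaf-school education
        and candidate["daily_sign_use"]                  # e.g. deaf partner/workplace
        and candidate["years_in_deaf_community"] >= 10   # prolonged membership
    )

candidates = [
    {"id": "S01", "age_of_sign_exposure": 2, "deaf_school": True,
     "daily_sign_use": True, "years_in_deaf_community": 25},
    {"id": "S02", "age_of_sign_exposure": 7, "deaf_school": True,
     "daily_sign_use": True, "years_in_deaf_community": 12},
]
selected = [c for c in candidates if meets_criteria(c)]  # keeps S01 only
```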
6. Video-recording data

6.1. Recording conditions

Research on sign languages shares many methodological issues with research on spoken languages, but it also presents issues of its own. The fact that data cannot be audio-recorded but need to be video-recorded is one of these sign-language-specific
challenges. Especially when recording data to study the structure of the language, but also when it comes to issues such as sociolinguistic research on variation, one of the major decisions a researcher needs to make is whether to opt for high quality recording or rather to try to minimize the impact of the data collection setting on the language production of the informants.
It is a well-known fact that language users are influenced by the formality of the setting. Different situations may result in variations in style and register in the language production. This is equally true for speakers and signers, but in the latter group, the specific relationship between the sign language and the spoken language of the surrounding hearing community is an additional factor that needs to be taken into account. In many countries, sign languages are not yet seen as equal to spoken languages, but even if a sign language is recognized as a fully-fledged natural language, it is still a minority language used by a small group of language users surrounded by a much larger group of majority language speakers. As a result, in many Deaf communities, increased formality often results in increased influence from the spoken language (Deuchar 1984). A problem related to this issue is the tendency to accommodate to the (hearing) interlocutor. This is often done by including a maximum of characteristics from the structure of the spoken language and/or by using structures and mechanisms that are supposedly more easily understood by people with poor(er) signing skills. For example, when a Flemish signer is engaged in the Volterra et al. elicitation task (see section 3.1.7) and needs to describe a picture of a tree in front of a house, s/he may decide to start the sentence with the two-handed lexical sign house followed by the sign tree and a simultaneous combination of a ‘fragment buoy’ (Liddell 2003) referring to house on the non-dominant hand and a ‘classifier’ referring to the tree on the dominant hand, thereby representing the actual spatial arrangement of the referents involved by the spatial arrangement of both hands. Alternatively, s/he might describe the same picture using the three lexical signs tree C in-front-of C house in sequence, that is, in the sequential arrangement familiar to speakers of Dutch. In both cases, the result is a grammatically correct sentence in VGT, but whereas the first sentence involves sign language specific mechanisms, namely (manual) simultaneity and the use of space to express the spatial relationship between the two referents, the same is not true for the second sentence, where the relationship is expressed through the use of a preposition sign and word order, exactly as in the Dutch equivalent De boom staat voor het huis (‘the tree is in front of the house’).
One way to overcome this problem in an empirical setting is by engaging native signers to act as conversational partners. However, because of the already mentioned specific relationship between a sign language and the majority spoken language, signers may still feel that they should use a more ‘spoken language compatible’ form of signing in a formal setting (also see the discussion of the ‘Observer’s Paradox’ in section 3.1). Because of such issues, researchers may try and make the recording situation as informal and natural as possible. Ways of doing this include:
⫺ organising the data collection in a place familiar to the signer (e.g.
at home or in the local Deaf club);
⫺ providing a deaf conversational partner: This can be someone unknown to the signer (e.g. a deaf researcher or research assistant, a deaf student), although the presence of a stranger (especially if it is a highly educated person) may in itself
have an impact on the language production of the informant. It may therefore be better to work with an interlocutor the signer knows, but at the same time, it should not be an interlocutor the signer is too closely related with (e.g. husband/wife or sibling) because this may result in a specific language use (known as ‘within-the-family jargon’) which may not be representative of the language use in the larger linguistic community;
⫺ avoiding the presence of hearing people whenever possible;
⫺ only using one (small-size) camera and avoiding the use of additional recording equipment or lights;
⫺ not using the first ten minutes of what has been videotaped; these first ten minutes can be devoted to general conversation to make sure that the signer is at ease and gradually forgets the presence of the camera.
6.2. Technical issues

In certain circumstances, for instance when compiling a corpus for pedagogical reasons, researchers may opt for maximal technical quality when recording sign language data. Factors that are known to increase the quality of a recording include the following:
⫺ Clothing: White signers preferably wear dark, plain clothes and black signers light, plain clothes to make sure there is enough contrast between the hands and the background when signs are produced on or in front of the torso. Jewellery can be distracting. If the informant usually wears glasses, it may be necessary to ask him/her to take off the glasses in order to maximize the visibility of the non-manual activity (obviously, this is only possible when interaction with an interlocutor is not required).
⫺ Background: The background can also influence the visibility of the signed utterances. Consequently, a simple, unpatterned background is a prerequisite, and frequently, a certain shade of blue or green is used. This is related to the use of the chroma key (a.k.a. bluescreen or greenscreen) technique, where two images are mixed: the informant is recorded in front of a blue or green background which is later replaced by another image, so that the informant seems to be standing in front of the other background (a simple sketch of this technique follows the list below). If there is no intention to apply this technique, then there is no need for a blue or green background; simply “unpatterned” is good enough. However, visual distraction in the form of objects present in the signer’s vicinity should be avoided.
⫺ Posture: When a signer sits down, this may result in a different dimension of the signing space as compared to the same signer standing upright (and this may be a very important factor in phonetic or phonological research, for instance).
⫺ Lighting: There clearly needs to be enough foreground lighting. Light sources behind the signer should be avoided as much as possible since they result in low visibility of facial expressions. Shadows should likewise be avoided.
⫺ Multiple cameras: How many cameras are necessary, their position, and what they focus on will be determined by the specific research question(s); the analysis of non-manual activity, for example, requires the use of one camera zooming in on
the face of the informant(s) (although nowadays it is also possible to zoom in electronically on a selected area within the image afterwards).
⫺ Position of the camera(s) in relation to the signer(s): In order to fully capture the horizontal dimension of the signed production, some researchers avoid full frontal recording and prefer a slight angle. A top view facilitates the analysis of the relationship of the hands and the body, which may be important when studying the use of space.
⫺ Use of elicitation materials: The signer should not hold any papers or other things in his/her hands while signing and should not start to sign while (still) looking at the materials.
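As announced under ‘Background’ above, the following is a minimal sketch of chroma key compositing: pixels in which the green channel clearly dominates are treated as background and replaced by another image. The dominance threshold is illustrative; production keyers use considerably more refined methods (soft mattes, spill suppression).

```python
# Minimal sketch of chroma key ("greenscreen") compositing: replace
# green-dominant pixels of a video frame with a new background image.
import numpy as np

def chroma_key(frame: np.ndarray, background: np.ndarray,
               dominance: int = 40) -> np.ndarray:
    """frame and background are H x W x 3 uint8 RGB arrays of equal shape."""
    rgb = frame.astype(np.int16)  # avoid uint8 overflow in the subtraction
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # Key out pixels whose green channel exceeds both red and blue
    # by the dominance margin.
    mask = (g - r > dominance) & (g - b > dominance)
    out = frame.copy()
    out[mask] = background[mask]
    return out
```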
6.3. Issues of anonymity

One major disadvantage of the necessity to video-record sign language production is related to the issue of anonymity. When presenting or publishing their work, researchers may wish to illustrate their findings with sequences or stills taken from the video-recorded data. However, not all signers like the idea of their face being shown to a larger public. In the age of online publishing, this problem becomes even more serious. Obviously, making the signer unrecognisable, for instance, by blurring his/her face ⫺ a strategy commonly used to anonymise video-taped speakers ⫺ is not an option because important non-manual information expressed on the face will be lost. It may therefore be necessary to make use of a model reproducing the examples for the purpose of dissemination. This solution may be relatively easy for individual signs or for constructions to be reproduced in isolation but may be problematic in the case of longer stretches of language production. The problem of privacy protection is, of course, also highly relevant in the case of on-line publication of sign language video recordings and annotations. This issue cannot be further dealt with here, but we would like to refer to Crasborn (2008), who discusses developments in internet publishing of sign language data and related copyright and privacy issues.
The fact that sign language production needs to be video-recorded also has consequences in terms of research design. A well-known research design to study language attitudes is the “matched guise” technique developed by Lambert and colleagues (Lambert et al. 1960) to study attitudes towards English and French in Montreal, Canada. The visual nature of sign languages makes it difficult to apply this technique when studying sign language attitudes because it will soon be obvious that one and the same signer is producing two samples in two different languages or variants. Fenn (1992, in Burns/Matthews/Nolan-Conroy 2001, 189) attempted to overcome this by selecting physically similar signers, dressed in a similar fashion. However, he encountered another difficulty since many of his subjects recognized the signers presenting the language samples.
7. Conclusion

In this chapter, we have attempted to give a brief survey of data collection techniques using different types of elicitation materials and using corpora. We have also focused
on the importance of deciding which type of data should be used for which type of analysis. Furthermore, we have discussed the problem of informant selection and some more technical aspects of video-recording the data. Throughout the chapter, we have focused on data collection in the sense of collecting sign language data. Sign language research may also involve other types of data collection, such as questioning signers on matters related to sign language use or (sign) language attitudes. In this context, too, the sociolinguistic reality of Deaf communities may require a specific approach. Matthews (1996, in Burns/Matthews/Nolan-Conroy 2001, 188) describes how he and his team, because of a very poor response from deaf informants on postal questionnaires, decided to travel around Ireland to meet with members of the Deaf community face to face. They outlined the aims and objectives of their study (using Irish Sign Language) and presented informants with the possibility to complete the questionnaire on the spot, giving them the opportunity to provide their responses in Irish Sign Language (which were later translated into written English in the questionnaires). Thanks to this procedure, response rates were much higher. Finally, we would also like to stress the need for including sufficient information on data collection and informants in publications in order to help the reader evaluate the research findings, discussion, and conclusions. It is quite customary to collect and provide metadata in the context of sociolinguistic research and it has become standard practice in the larger corpus projects as well, but we would like to encourage the collection of the above type of information for all linguistic studies, as we are convinced that this will vastly improve the comparability of studies dealing with different sign languages or sign language varieties.
8. Literature

Briggs, Raymond
1978 The Snowman. London: Random House.
Burns, Sarah/Matthews, Patrick/Nolan-Conroy, Evelyn
2001 Language Attitudes. In: Lucas, Ceil (ed.), The Sociolinguistics of Sign Languages. Cambridge: Cambridge University Press, 181⫺216.
Chafe, Wallace L.
1980 The Pear Stories: Cognitive, Cultural, and Linguistic Aspects of Narrative Production. Norwood, NJ: Ablex.
Chomsky, Noam
1965 Aspects of the Theory of Syntax. Cambridge, MA: The MIT Press.
Cormier, Kearsy
2002 Grammaticization of Indexic Signs: How American Sign Language Expresses Numerosity. PhD Dissertation, University of Texas at Austin.
Costello, Brendan/Fernández, Javier/Landa, Alazne
2008 The Non-(existent) Native Signer: Sign Language Research in a Small Deaf Population. In: Quadros, Ronice M. de (ed.), Sign Languages: Spinning and Unraveling the Past, Present and Future. TISLR 9: Forty Five Papers and Three Posters from the 9th Theoretical Issues in Sign Language Research Conference, Florianopolis, Brazil, December 2006. Petrópolis/RJ, Brazil: Editora Arara Azul, 77⫺94. [Available at: www.editora-araraazul.com.br/EstudosSurdos.php]
Crasborn, Onno
2008 Open Access to Sign Language Corpora. Paper Presented at the 3rd Workshop on the Representation and Processing of Sign Languages (LREC), Marrakech, Morocco, May 2008. [http://www.lrec-conf.org/proceedings/lrec2008, 33⫺38]
Crasborn, Onno/Hanke, Thomas
2003 Additions to the IMDI Metadata Set for Sign Language Corpora. Agreements at an ECHO Workshop, May 2003, Nijmegen University. [Available at: http://www.let.kun.nl/sign-lang/echo/docs/SignMetadata_May2003.doc]
Crasborn, Onno/Zwitserlood, Inge/Ros, Johan
2008 Corpus NGT. An Open Access Digital Corpus of Movies with Annotations of Sign Language of the Netherlands. Centre for Language Studies, Radboud University Nijmegen. [Available at: http://www.ru.nl/corpusngt]
Cuxac, Christian
2000 La Langue des Signes Française. Les Voies de l’Iconicité (Faits de Langues No 15⫺16). Paris: Ophrys.
Deuchar, Margaret
1984 British Sign Language. London: Routledge & Kegan Paul.
Hickmann, Maya
2003 Children’s Discourse: Person, Space and Time Across Languages. Cambridge: Cambridge University Press.
Hong, Sung-Eun/Hanke, Thomas/König, Susanne/Konrad, Reiner/Langer, Gabriele/Rathmann, Christian
2009 Elicitation Materials and Their Use in Sign Language Linguistics. Poster Presented at the Sign Language Corpora: Linguistic Issues Workshop, London, July 2009.
Johnston, Trevor
2004 W(h)ither the Deaf Community? Population, Genetics, and the Future of Australian Sign Language. In: American Annals of the Deaf 148(5), 358⫺375.
Johnston, Trevor
2008 Corpus Linguistics and Signed Languages: No Lemmata, No Corpus. Paper Presented at the 3rd Workshop on the Representation and Processing of Sign Languages (LREC), Marrakech, Morocco, May 2008. [http://www.lrec-conf.org/proceedings/lrec2008/, 82⫺87]
Johnston, Trevor
2009 The Reluctant Oracle: Annotating a Sign Language Corpus for Answers to Questions We Can’t Ask Any Other Way. Abstract of a Paper Presented at the Sign Language Corpora: Linguistic Issues Workshop, London, July 2009.
Johnston, Trevor/Vermeerbergen, Myriam/Schembri, Adam/Leeson, Lorraine
2007 “Real Data Are Messy”: Considering Cross-linguistic Analysis of Constituent Ordering in Auslan, VGT, and ISL. In: Perniss, Pamela/Pfau, Roland/Steinbach, Markus (eds.), Visible Variation: Comparative Studies on Sign Language Structure. Berlin: Mouton de Gruyter, 163⫺205.
Karlsson, Fred
1984 Structure and Iconicity in Sign Language. In: Loncke, Filip/Boyes-Braem, Penny/Lebrun, Yvan (eds.), Recent Research on European Sign Languages. Lisse: Swets and Zeitlinger, 149⫺155.
Labov, William
1969 Contraction, Deletion, and Inherent Variability of the English Copula. In: Language 45, 715⫺762.
Labov, William
1972 Sociolinguistic Patterns. Philadelphia, PA: University of Pennsylvania Press.
Lambert, Wallace E./Hodgson, Richard C./Gardner, Robert C./Fillenbaum, Samuel
1960 Evaluational Reactions to Spoken Language. In: Journal of Abnormal and Social Psychology 60, 44⫺51.
Larsen-Freeman, Diane/Long, Michael H.
1991 An Introduction to Second Language Acquisition Research. London: Longman.
Leech, Geoffrey
2000 Same Grammar or Different Grammar? Contrasting Approaches to the Grammar of Spoken English Discourse. In: Sarangi, Srikant/Coulthard, Malcolm (eds.), Discourse and Social Life. Harlow: Pearson Education, 48⫺65.
Liddell, Scott K.
2003 Grammar, Gesture, and Meaning in American Sign Language. Cambridge: Cambridge University Press.
Lucas, Ceil/Bayley, Robert
2005 Variation in ASL: The Role of Grammatical Function. In: Sign Language Studies 6(1), 38⫺75.
Lucas, Ceil/Bayley, Robert/Valli, Clayton
2001 Sociolinguistic Variation in American Sign Language. Washington, DC: Gallaudet University Press.
Mahlberg, Michaela
1996 Editorial. In: International Journal of Corpus Linguistics 1(1), iii⫺x.
Mayer, Mercer
1969 Frog, Where Are You? New York: Dial Books for Young Readers.
Paikeday, Thomas M.
1985 The Native Speaker Is Dead. Toronto: Paikeday Publishing Inc.
Pateman, Trevor
1987 Language in Mind and Language in Society: Studies in Linguistic Reproduction. Oxford: Clarendon Press.
Schembri, Adam
2001 Issues in the Analysis of Polycomponential Verbs in Australian Sign Language (Auslan). PhD Dissertation, University of Sydney, Australia.
Schütze, Carson T.
1996 The Empirical Base of Linguistics. Grammaticality Judgments and Linguistic Methodology. Chicago: University of Chicago Press.
Supalla, Ted
1982 Structure and Acquisition of Verbs of Motion and Location in American Sign Language. PhD Dissertation, University of California, San Diego.
Supalla, Ted/Newport, Elissa/Singleton, Jenny/Supalla, Sam/Metlay, Don/Coulter, Geoffrey
no date The Test Battery for American Sign Language Morphology and Syntax. Manuscript, University of Rochester.
Van Herreweghe, Mieke/Vermeerbergen, Myriam
2008 Referent Tracking in Two Unrelated Sign Languages and in Home Sign Systems. Paper Presented at the Workshop “Gestures: A Comparison of Signed and Spoken Languages” at the 30th Annual Meeting of the German Linguistic Society (DGfS), Bamberg, February 2008.
Van Herreweghe, Mieke/Vermeerbergen, Myriam
2009 Flemish Sign Language Standardisation. In: Current Issues in Language Planning 10(3), 308⫺326.
Vermeerbergen, Myriam
2006 Past and Current Trends in Sign Language Research. In: Language and Communication 26(2), 168⫺192.
Volterra, Virginia/Corazza, Serena/Radutsky, Elena/Natale, Francesco
1984 Italian Sign Language: The Order of Elements in the Declarative Sentence. In: Loncke, Filip/Boyes-Braem, Penny/Lebrun, Yvan (eds.), Recent Research on European Sign Languages. Lisse: Swets and Zeitlinger, 19⫺48.
Mieke Van Herreweghe, Ghent (Belgium)
Myriam Vermeerbergen, Antwerp & Leuven (Belgium)
43. Transcription

1. Introduction
2. Transcription at the level of phonology
3. Transcription at the level of morphology
4. Multimedia tools
5. Conclusion
6. Literature and web resources
Abstract

The international field of sign language linguistics is in need of standardized notation systems for both form and function. This chapter provides an overview of available means of notating components of manual signs, non-manual devices, and meaning. Attention is also paid to problems of representing simultaneous articulators of hands, face, and body. A final section provides an overview of several tools of multimedia analysis. Standardization, in the twenty-first century, requires attention to computer-based storage and processing of data; numerous links are provided to web-based facilities. Throughout, the chapter addresses theoretical problems of defining and relating linguistic levels of analysis in the study of sign languages.

“What is on a transcript will influence and constrain what generalizations emerge.” (Elinor Ochs 1979, 45)
1. Introduction

Transcription serves a number of functions, such as linguistic analysis, pedagogy, providing deaf signers with a writing system, creating input to an animation program, and others. Because this chapter appears in a handbook of sign language linguistics, we limit ourselves to those notation systems that have played a role in developing and advancing our understanding of sign languages as linguistic systems. Although most notation schemes have been devised for the descriptive study of particular sign
languages, here we aim at the goals of an emerging field of sign language linguistics, as exemplified by other chapters in this volume. The field of sign language linguistics is rapidly expanding in scope, discovering sign languages around the world and describing them with greater depth and precision. At this point, successful descriptive and typological work urgently requires consensus on standardized notations of both form and function. The study of spoken languages has a long history, and international levels of standardized notation and analysis have been achieved. We begin with a brief overview of the sorts of standardization that can be taken as models for sign language linguistics.
In 1888, linguists agreed on a common standard, the International Phonetic Alphabet (IPA), for systematically representing the sound segments of spoken language. Example (1) presents a phonetic transcription of an American English utterance in casual speech, “I’m not gonna go” (Frommer/Finnegan 1994, 11).

(1) amnátgunegó
The IPA represents an international consensus on phonological categories. On the level of morphology, basic categories have been used since antiquity ⫺ in India, Greece, Rome, and elsewhere ⫺ with various sorts of abbreviations. For the past several generations, the international linguistic community has agreed on standard terms, such as SG or sg or Sg for ‘singular’, regardless of varying theoretical persuasions, and with minimal adjustments for the language of the publication (e.g., Russian ed.č. as a standard abbreviation for edinstvennoe čislo ‘singular’). In the first issue of Language, in 1925, the Linguistic Society of America established a model of printing foreign language examples in italics followed by translations in single quotes. And for the past four or five decades, an international standard of interlinear morpheme glossing has become widespread, most recently formulated by The Leipzig Glossing Rules (based, in part, on Lehmann (1982, 1994); see section 6 for website). Example (2) presents the format that is now used in most linguistic publications, with interlinear morpheme-by-morpheme glosses and a translation in the language of the publication. Examples under analysis can be presented in various notations, as needed ⫺ generally in the orthography of the source. Grammatical morphemes are indicated by glosses in small caps, following a standard list; lexical items are given in the language of the publication (which we refer to as the ‘description language’). The term ‘gloss’ refers both to the grammatical codes and translations of lexical items in the second line. There is a strict correspondence of units, indicated by hyphens, in both lines. A free translation or paraphrase is given, again in the language of the analysis, at whatever level of specificity is necessary for exposition. In the Turkish example in (2), dat is dative case, acc is accusative case, and pst is past tense.
(2) Dede çocuğ-a top-u ver-di [Turkish]
    grandfather child-dat ball-acc give-pst
    ‘Grandfather gave (the) child (the) ball.’
The Leipzig Glossing Rules note: “Glosses are part of the analysis, not part of the data” (p. 2). In this example, there would be little disagreement about dat and pst, but more arguable examples are also discussed on the website. A profession-wide standard makes it possible to carry out systematic crosslinguistic, typological, and diachronic analyses without having a command of every language in
the sample, and without knowing how to pronounce the examples. To give a simple example, compare (2) with its Korean equivalent in (3) (from Kim 1997, 340). The sentence is not given in Korean orthography, which would be impenetrable to an English-language reader; rather, a standard romanization is presented. Additional grammatical terms used in the Korean example are nom (‘nominative’), hon (‘honorific’), and decl (‘declarative’).

(3) Halapeci-ka ai-hanthey kong-ul cwu-si-ess-ta [Korean]
    grandfather-nom child-dat ball-acc give-hon-pst-decl
    ‘Grandfather gave (the) child (the) ball.’
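The strict correspondence of units that the hyphens encode can be verified mechanically, which is one practical benefit of the standardized format. The following is a minimal sketch of such a check (a hypothetical helper, with uppercase labels standing in for the small-caps glosses of examples (2) and (3)):

```python
# Minimal sketch: verify the strict unit correspondence of an interlinear
# gloss, as in examples (2) and (3): one gloss per word, and one gloss
# segment per hyphen-separated morpheme.

def check_alignment(text: str, gloss: str) -> None:
    words, glosses = text.split(), gloss.split()
    assert len(words) == len(glosses), "word/gloss count mismatch"
    for w, g in zip(words, glosses):
        assert len(w.split("-")) == len(g.split("-")), \
            f"morpheme mismatch: {w!r} vs {g!r}"

# Example (2), Turkish:
check_alignment("Dede çocuğ-a top-u ver-di",
                "grandfather child-DAT ball-ACC give-PST")
# Example (3), Korean:
check_alignment("Halapeci-ka ai-hanthey kong-ul cwu-si-ess-ta",
                "grandfather-NOM child-DAT ball-ACC give-HON-PST-DECL")
```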
Comparing (2) and (3), one can see that the Turkish and Korean examples are both verb-final; that subject, indirect object, and direct object precede the verb; and that verb arguments receive case-marking suffixes. In addition, Korean has an honorific marker. One might propose that the two languages are typologically similar. This is a quick comparison of only two utterances, but it illustrates the value of standardized morphological glossing in crosslinguistic and typological comparisons. Morphosyntactic analysis generally does not require phonological transcription or access to audio examples of utterances. The study of sign languages is historically very recent, and the field has not yet achieved the level of careful standardization that is found in the linguistics of spoken languages. In this chapter, we give a brief overview of several attempts to represent the forms and meanings of signed utterances on the printed page. Notation systems proliferate, and we cannot present all of them. We limit ourselves to the task of transcription, which we understand as the representation of signed utterances (generally preserved in video format) in the two-dimensional, linear medium of print. A transcription makes use of a notation system ⫺ that is, a static visual means of capturing a signed performance or presenting hypothetical sign language examples. Various formats have been used for notation, because signed utterances make simultaneous use of a number of articulators. Formats include subscripts, superscripts, and parallel horizontal arrays of symbols and lines, often resembling a musical score. In this chapter, we are concerned with the portions of notation systems that play a role in linguistic analysis. In this regard, it is particularly important to independently indicate form ⫺ such as handshape or head movement ⫺ and function ⫺ such as reference to shape or indication of negation. Because forms can be executed simultaneously ⫺ making use of hands, head, face, and body ⫺ sign language transcription faces particular challenges of representing co-occurring forms with varying temporal contours. The chapter treats issues of form and function separately. Section 2 deals with issues of representing signed utterances in terms of articulation and phonology. Section 3 attends to levels of meaning, including morphology, syntax, and discourse organization. The fourth section deals with multimedia formats of data presentation, followed by a concluding section.
2. Transcription at the level of phonology

Over the past 40 years, sign languages have achieved recognition as full human languages, in part as a result of the analysis of formational structure. The pioneer in linguistic
description of sign language was William Stokoe at Gallaudet University in Washington, DC (see chapter 38 on the history of sign linguistics). To appreciate Stokoe’s contribution, we step back briefly to consider the distinction between writing and transcription. Writing is so much a part of the life of those who read and write easily that it is sometimes difficult to remember that writing is actually a secondary form of language, derived from continuous, ephemeral, real-time expressive forms (speech or signing). The representation of language in written form, for many languages, separates words from one another (prompting the comment from at least one literate adult fitted with a cochlear implant, “I can’t hear the spaces between the words”). Written form eliminates many of the dynamic characteristics of speech, including pitch, intonation, vowel lengthening, pausing, and other time-relevant aspects of the signal. We include imitative grunts and growls in vocal narrative to describe the sounds made by our car, or the surprise felt when snow slides off the roof just as we open the door. We use a variety of typographic conventions (punctuation, bold face, italics) to indicate a few of those dynamics, but we tolerate the loss of many of them as well. When we look at signing with the idea of preserving the ephemeral signal in written form, we are confronted by decisions about what is language and what is an expressive gesture. There is not a long tradition of writing, nor standards for what ought to be captured in written form, for signing communities. There is no Chaucer, Shakespeare, or Brontë to provide models for us of written signing from hundreds of years ago. The recordings of deaf teachers and leaders in the US from the early part of the twentieth century provide us with models of more and less formal discourse (Padden 2004; Supalla 2004). The Hotchkiss lecture (1912), recalling his childhood memories of Laurent Clerc, includes a few examples of presumably intentionally humorous signs (the elongated long) that illustrate a dynamic which might be difficult to capture in writing. For those who haven’t seen this example, the sign which draws the index finger of the dominant hand along the back of the opposite forearm is performed by drawing the finger from wrist to shoulder and beyond, followed by a laugh. Moreover, the distinction between ordinary writing (whether listmaking or literature) and scientific notation is a further refinement of what counts as worthy of representation. This chapter focuses on scientific notations that are capable of recording and representing sign languages, especially as they have evolved in communities of use. Scientific notation for language ⫺ transcription ⫺ aims to include just enough symbols to represent all the signs of any natural (sign) language (and probably some of the invented systems that augment the languages). Scientific notations typically do not have good tools or symbols for some of the dynamic aspects (pitch, in speech; size of signing space in sign), unless the aspects are systematic and meaningful throughout a speech community. Otherwise, there are side comments, footnotes, or other extranotational ways of capturing these performance variants.
2.1. Stokoe Notation

William Stokoe posited that the sign language in North America used conventionally by deaf people is composed of a finite set of elements that recombine in structured ways to create an unlimited number of meaningful ‘words’ (Stokoe 1960 [1978]).
Stokoe’s analysis went further, to define a set of symbols that notate the components of each sign of American Sign Language (ASL). He and his collaborators used these symbols again in A Dictionary of American Sign Language on Linguistic Principles (Stokoe/Casterline/Croneberg 1965), the first comprehensive lexicon of a sign language arranged by an order of structural elements, rather than by their translation into a spoken (or written) language, or by semantic classes. In those days before easy access to computers, Stokoe commissioned a custom accessory for the IBM Selectric typewriter that would allow him and others access to the specialized symbol set to be able to write ASL in its own ‘spelling system’. Stokoe Notation claims to capture three dimensions of signs. He assigned invented names to these dimensions: tabula or tab for location, designator or dez for handshapes, and signation or sig for movement. A close examination of the Dictionary shows that at least one (and perhaps two or three) additional dimensions are encoded or predictable from the notations, namely, orientation of the hands, position of the hands relative to each other, and change of shape or position. It is largely an emic system ⫺ that is, it is aimed at the level of categorical distinctions between sign formatives (cf. phonemic as contrasted with phonetic). In that regard, it is ingeniously designed and parsimonious. Stokoe Notation makes use of 55 symbols. Capital letter forms are used for 19 symbols (including digits where appropriate) indicating dez (handshapes), with just a few modifications (e.g., erasing the top of the number 8 to indicate that thumb and middle finger have no contact in this handshape, as they would for the number 8 in ASL). The system uses 12 symbols (including a null for neutral space) for tab (locations), which evoke depiction of body parts, such as the forehead or the wrist surface of a fist. Finally, 24 symbols indicate movements of the hands (sig). Positions of the symbols within the written form are important: the order of mention is tab followed by dez followed by sig. Sig symbols stacked vertically indicate movements that are realized simultaneously (such as circling in a forward [away from the signer’s body] motion), while sig symbols arranged left to right indicate successive motions (such as contact followed by opening of the hand).
(4) Ø5 v• Ο ɒ Ο ɒ a,a∞ 55>< h5’5∞ [ASL]
    grandfather child give ball
    ‘Grandfather gives (or gave) the child the ball.’
Example (4) shows the four signs that participate in the utterance ‘Grandfather gave the child the ball’, but it does not account for at least two important adjustments on the sign for ‘give’ that would happen to the signs in the performance of this sentence in ordinary ASL. The sign for ‘give’ would likely assimilate handshapes to the sign for ‘ball’, to indicate transfer of the literal object, and would be performed downward, from the position of an adult to the position of a (smaller) child. Nor does the sequence of signs noted show eye gaze behavior accompanying the signs. Stokoe Notation writes from the signer’s point of view, where the assumption is that asymmetrical signs are performed with the right hand dominant, as some of the notational marks refer to ‘left’ or ‘right’. The system was intended for canonical forms and dictionary entries, rather than for signs as performed in running narrative or conversation. It does not account for morphological adjustments to signs in utterances,
nor for timing (velocity, acceleration, pausing), overlap (between interlocutors), grammatical derivation, performance errors, or ordinary rapid ‘speech’ phenomena. Stokoe’s notation makes no attempt to account for movement of the body in space, which an etic system would do ⫺ that is, a system aimed at the fine-grained components of ‘phonemic’ categories. Stokoe, late in his life, made the case that it was etic, but did not propose additional symbols or combinations of symbols to illustrate how the notation fulfilled that goal. Idiosyncratic variants of this notation system can render some, but not all, of the adjustments needed to capture signs in live utterances rather than as single lexical items out of context.
2.1.1. Modifications of Stokoe Notation

An important translation of Stokoe Notation was carried out by a team at Northeastern University in the late 1970s. Mark Mandel’s (1993) paper about a computer-writeable transliteration system proposed a linear encoding into 7-bit ASCII. This ASCII version was successfully used to encode the Dictionary of ASL, making it possible, for example, to tally 1628 non-compounds by automatic means, by interrogating all the dictionary entries. The system classified that subset of the whole Dictionary of ASL into those signs that use two hands with symmetrical handshapes and those which use two hands where one hand is the base and the other hand the actor (a dominance relationship). The advantages of an encoding system which can be parsed by machine go well beyond this simple tally, and would allow comparison of encoded utterances at many levels of language structure.
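The following is a minimal sketch of the kind of machine query just described: classifying two-handed entries as symmetrical or dominance-based and tallying the result. The record format and the sample entries are purely illustrative; they are not Mandel’s actual ASCII code set.

```python
# Minimal sketch of a machine query over encoded dictionary entries:
# classify two-handed signs as symmetrical (both hands share a handshape)
# or dominance-based (one hand acts as the base). Records are illustrative.
from collections import Counter

entries = [
    {"gloss": "GIVE",  "hands": 1, "dez": ["O"],      "base_hand": None},
    {"gloss": "BALL",  "hands": 2, "dez": ["5", "5"], "base_hand": None},
    {"gloss": "WEEK",  "hands": 2, "dez": ["1", "B"], "base_hand": "nondominant"},
]

def classify(entry: dict) -> str:
    if entry["hands"] == 1:
        return "one-handed"
    if entry["base_hand"] is None and entry["dez"][0] == entry["dez"][1]:
        return "symmetrical"
    return "dominance"

print(Counter(classify(e) for e in entries))
# Counter({'one-handed': 1, 'symmetrical': 1, 'dominance': 1})
```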
2.1.2. Influence on the analysis of other sign languages

Stokoe’s analysis (and dictionary) has influenced the analysis of many other sign languages, and served as a model for dictionaries of Australian Sign Language (Auslan) and British Sign Language (BSL), among others. Elena Radutzky (1992), in conjunction with several Italian collaborators and with Lloyd Anderson, a linguist who specializes in writing systems, modified Stokoe’s notation to consider the difference between absolute (geometric) space and the relative space as performed by an individual. As might be anticipated, the Italian team found gaps in the handshape inventory, and especially in the angle of movement. Their analysis was used in the Dictionary of Italian Sign Language.
2.2. Sign Font

Developed by Emerson and Sterns, an educational software company, Sign Font was created by a multidisciplinary team under a small business grant from the US government. The project’s software specialists worked in conjunction with linguists (Don Newkirk and Marina McIntire) fluent in ASL, deaf consultants, and an artist. The font itself was designed by Brenda Castillo (Deaf) and the late Frank Allen Paul (a hearing
The font itself was designed by Brenda Castillo (Deaf) and the late Frank Allen Paul (a hearing sign language artist), with explicit design criteria (such as print density, optimality of code, and a sequential alphabetic script). Field tests with middle-school-age students showed that SignFont is usable after relatively brief instruction. Example (5) gives the SignFont version of the proposition presented in Stokoe Notation in (4). In contrast to (4), in example (5) the sign for 'give' follows the sign for 'ball' and incorporates the handshape indicating the shape of the item given. (This handwritten example is based on the Edmark Handbook (Newkirk 1989b) and Exercise Book (Newkirk 1989a), which explicate the principles for using SignFont and give a number of examples. Handwritten forms are presented here to underline the intended use of SignFont as a writing system for the Deaf, in addition to a scientific tool. The same is true of SignWriting, discussed in the following section.)

(5)
'Grandfather gives (or gave) the child the ball.'

In SignFont, the order of elements differs from Stokoe Notation: Handshape, Action Area, Location, Movement. Action Area is distinctive to this system; it describes which surface or side of the hand is the focus of the action. (In Stokoe Notation, the orientation of the hands is shown in subscripts to the handshapes. The relationship between the hands would be shown by a few extra symbols for instances in which the two hands grasp, link, or intersect each other.)
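The fixed element order just described lends itself to a simple record type. The sketch below is our own illustration of that ordering, with invented placeholder values rather than actual SignFont symbols.

```python
from dataclasses import dataclass

@dataclass
class SignFontSign:
    """A sign in SignFont's fixed element order; values are placeholders."""
    handshape: str    # e.g. the cupped hand incorporated from 'ball'
    action_area: str  # the surface or side of the hand in focus
    location: str
    movement: str

    def serialize(self) -> str:
        # SignFont is a sequential script, so the elements are written
        # out in this fixed order.
        return "-".join((self.handshape, self.action_area,
                         self.location, self.movement))

give = SignFontSign("cupped", "palm", "neutral-space", "downward-arc")
print(give.serialize())  # cupped-palm-neutral-space-downward-arc
```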
2.3. SignWriting

While we acknowledge the attempts within Stokoe Notation to create symbols that are mnemonic in part because of their partial pictorial representation, other systems are much more obvious in their iconicity. Sutton's SignWriting, originally a dance notation, later extended from ballet to other dance forms, martial arts, exercise, and sign language(s), was intended as a means of capturing gestural behavior in the flow of performance. It joins Labanotation in the goal of memorializing ephemeral performances. In its sign language variant, SignWriting looks at the sign from the viewer's point of view, and has a shorthand (script) form for live transcription. The system is made up of schematized iconic symbols for hands, face, and body, with additional notations for location and direction. Examples can be found in an online teaching course (see section 6 for website). Example (6) gives the SignWriting version of the proposition previously transcribed in (4) and (5).
In this example, the signs for 'grandfather' and 'child' are followed by pointing indices noting the spatial locations assigned to the two participants, and again, the sign for 'ball' precedes the sign for 'give', which again incorporates the ball's shape and size relative to the participants. Note that the phrasal elements are separated by horizontal strokes of various weights (the example is also available at http://www.signbank.org/signpuddle). While SignWriting is usually written in vertical columns, it is presented here in a horizontal arrangement to save printing space.

(6)
'Grandfather gives (or gave) the child the ball.'

Since its introduction in the mid-1970s, SignWriting has been expanded and adapted to handle more complex sign language examples and a larger set of sign languages. As of this writing, www.signbank.org shows almost 40 countries that use SignWriting, and catalogues signs from nearly 70 sign languages, though the inventory for any one language may contain only a few signs. SignWriting is still nurtured within a family of movement notation systems from Sutton. SignWriting has its own Deaf Action Committee to vet difficult examples and to advocate for the use of this writing system in schools and communities where a literacy tradition may be new to the deaf population. Modifications to SignWriting have added detailed means of noting facial gestures. When written in standard fashion, with vertical columns arranged from left to right, phrases or pausing structures can be shown with horizontal marks of several weights. That is, each sign is shown as a block of symbols. The relationship of the head to the hand or hands reflects the starting positions in physical space, and is characterized by the relative positions within the block. The movement arrows suggest a direction, though the distance traversed is not literally depicted. The division into handshape, location, and movement types is augmented by the inclusion of facial gestures, and by symbols for repeated and rhythmic movements. Symbols with filled spaces (or made with bold strokes) contrast with open (empty) versions of the same symbols to indicate orientation toward or away from the viewer. Half-filled symbols indicate hands which are oriented toward the centerline of the body (neither toward nor away from the viewer). In a chart by Cheryl Wren available from the signbank.org website (see section 6 for link), the symbols used for ASL are inventoried in their variations, categorized by handshape (for each of 10 different shapes); 39 different face markings (referring to brows, eyes, nose, and mouth); and movements distinguishing plane and rotation as well as internal movement (shaking, twisting, and combinations). This two-page chart does not give examples of signs that exemplify each of the variants, but the website does permit searching for examples by each of the symbols and within a particular sign language. Given that the symbol set for SignWriting allows for many variations in orientation of the sign, the writer may choose to write a more standardized (canonical) version or may record a particular performance with all its nuanced variants of 'pronunciation'. The notation, however, does not give any hint of morphological information, and may disguise potential relationships among related signs, even while capturing a specific utterance in its richness.
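As an illustration of the fill contrast just described, the sketch below models it as a small lookup table. The naming is ours, and since the description above does not fix which of the filled or open variants faces toward the viewer, that half of the mapping is marked as an assumption.

```python
from enum import Enum

class Fill(Enum):
    OPEN = "open"
    FILLED = "filled"
    HALF = "half-filled"

# Assumed mapping: the open/filled contrast encodes toward/away from the
# viewer (which is which is not specified above, so the direction is an
# assumption); half-filled reliably means toward the body's centerline.
ORIENTATION = {
    Fill.OPEN: "palm toward viewer (assumed)",
    Fill.FILLED: "palm away from viewer (assumed)",
    Fill.HALF: "toward the centerline of the body",
}

for fill in Fill:
    print(f"{fill.value:12s} -> {ORIENTATION[fill]}")
```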
2.4. Hamburg Notation System (HamNoSys)

The Hamburg Notation System (HamNoSys) was developed by a research team at Hamburg University. It was conceived to be useful for more than a single sign language, with its understanding of signs following Stokoe's general analysis. Moreover, it was developed in conjunction with a standard computer font and keyboard mapping, and it includes new markings for specific locations (regions of the body) at ever more detailed levels (the full system can be found online, along with Macintosh and Windows fonts; see section 6 for website). The catalogue of movements, for example, distinguishes between absolute movements (the example shows contact with one side of the chest followed by the other) and relative movements; and path movements are distinguished from local movements (where there is no change of location accompanying the movement of the sign). Non-manual movements (especially head shakes, nods, and rotations) are also catalogued within the movement group. Example (7) gives the HamNoSys version of our proposition, 'grandfather give child ball', with every line representing one sign. In this example again, the signs for 'grandfather' and 'child' are followed by pointing indices noting the spatial locations assigned to the two participants, and again, the sign for 'ball' precedes the sign for 'give', which again incorporates the ball's shape and size relative to the participants.

(7)
grandfather index child index ball give-(ball)
'Grandfather gives (or gave) the child the ball.'
2.5. Discussion and comparison

In a contribution to the Sign Language Linguists List (SLLING-L), Don Newkirk (1997) compares the several writing systems for the audience of the list:

What most distinguishes many of the linear notations, including SignFont, HamNoSys, [Newkirk's] early "literal orthography", and Stokoe, from SignWriting and DanceWriting lies more in the degree to which the linguistic structure of the underlying sign is expressed in the mathematical structure of the formulas into which the however iconic symbols of the script are introduced. The 4-dimensional structure of signs is represented in SignFont, for example, as 2-dimensional iconic symbols presented in a 1-dimensional string in the time domain. In SignWriting, the 3 spatial dimensions are more doggedly shown in the notation, but much of the 4th dimensional character of signing is obscured in the quite arbitrary (but rich) movement set.
A longer paper describing Newkirk's "literal orthography" (one that uses an ordinary English typewriter character set to notate signs), that is, his analysis of ASL, appears to be no longer available from the website.

Whereas SignWriting is used in various pedagogical settings (often as a writing system for deaf children), HamNoSys is used for linguistic analysis, initially by German sign language linguists, and later by others. Although SignWriting has been applied to several languages, Stokoe Notation and HamNoSys have been used most extensively for the languages for which they were developed: ASL for Stokoe, German Sign Language (DGS) for HamNoSys. The Dutch KOMVA project (Schermer 2003) used a notation system based on Stokoe's notation. Its inventory of regionally distinct signs (for the same meanings) established regional variants in order to work toward a national standard on the lexical level (see chapter 37, Language Planning, for details). Thus, there remains a gap for a standard, linguistically based set of conventions for the transcription of sign languages on the phonetic and phonological levels.

SignWriting does not account for the movement of the body in space, but has the potential to do so given its origins as a dance notation system. It does not capture timing information, nor interaction between participants in conversation. (As a dance notation it ought to be able to consider at least two and probably more participants.)

The several systems we have briefly surveyed share two common challenges to ease of use: transparency of conventions and computational implementation. Experience in using these systems makes it clear that one cannot make proper notations without knowledge of the language. For example, segmentation of an utterance into separate signs is often not visually evident to an observer who does not know the language. In addition, the fact that all of the systems mentioned here use non-roman character sets has prevented them from sharing common input methods, keyboard mappings, and, more importantly, compatible searching and sorting methods that would facilitate common access to materials created using these representations. Mandel's 7-bit ASCII notation for the Stokoe system was one attempt to surmount this problem. Creating a Unicode representation for encoding non-ASCII character sets on personal computers and on the web is relatively straightforward technologically, and has already been done for the SignWriting fonts (a minimal sketch of the general idea appears at the end of this section).

The notation of a sign language using any of these systems will still yield a representation of the signs' physical forms, rather than a more abstract level of parts of speech or morphological components. That is, the signs written with Stokoe's notation, SignWriting, or HamNoSys are given at a level equivalent to phonetic or phonological representation, rather than at a morphological level. In the following section, we turn to problems of representing meaning, which can be classed as morphology, syntax, or discourse.
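As promised above, here is a minimal sketch of the encoding idea, assuming a made-up symbol inventory mapped into Unicode's Private Use Area; once the symbols are ordinary codepoints, standard searching and sorting apply with no special machinery.

```python
# Map an invented notation-symbol inventory into the Private Use Area
# (U+E000 onward) so that ordinary string operations work on transcriptions.
PUA_START = 0xE000

symbols = ["HANDSHAPE_B", "HANDSHAPE_5", "MOVE_UP", "MOVE_DOWN", "CONTACT"]
encode = {name: chr(PUA_START + i) for i, name in enumerate(symbols)}
decode = {ch: name for name, ch in encode.items()}

# A transcribed sign is now just a Unicode string ...
sign = encode["HANDSHAPE_B"] + encode["MOVE_DOWN"] + encode["CONTACT"]

# ... so searching and sorting need no special machinery.
print(encode["MOVE_DOWN"] in sign)          # True
print([decode[ch] for ch in sorted(sign)])  # sorted by codepoint
```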
3. Transcription at the level of morphology

3.1. Lack of standardization
The field of sign language linguistics is in disarray with regard to the analysis of the internal components of signs, from the points of view of both grammatical morphology and lexicon. In this chapter, we do not limit ourselves to strictly 'grammatical' components, because there is no consensus on the lines between syntactic, semantic, and pragmatic uses of the many co-occurring elements of signed utterances, including handshapes, movement, body orientation, eyes, mouth, lips, and more. Furthermore, signed utterances make use of gestural devices to varying degrees. Morphology, broadly construed, deals with the internal structure of words as related to grammatical structure, and this is the topic we address. Sign linguists are in agreement that most verbs are polycomponential (to neutralize Elisabeth Engberg-Pedersen's (1993) term, polymorphemic). Nouns, too, are often polycomponential, and in many instances a single root form is used to derive both nouns and verbs. It is not always clear where to draw boundaries between words, especially in instances where part of a sign can be maintained during the production of a subsequent sign. We propose here that much more detailed descriptive data are needed, from many sign languages, before we can confidently sort components into the levels of traditional linguistic analysis of spoken languages. (And, indeed, those levels will be subject to reconsideration on the basis of ongoing sign language linguistics.)

Sign language researchers are generally provided with minimal guidelines for transcription (see, for example, Baker/van den Bogaerde/Woll 2005, as well as the papers in Bergman et al. 2001). Typically, lexical elements are transcribed in capital letter or small caps glosses, using words drawn from either the spoken language of the surrounding community or the language of publication, with many different types of subscripts, superscripts, and special symbols. Non-manual elements are commonly given in a line above the capital letter glosses. And a translation is provided in parentheses or enclosed in single quotes. Additional symbols such as hyphens, underscores, plus signs, circumflexes, and arrows are used differently from author to author. Moreover, there is no standardization of abbreviations (e.g., a point to self designating first person may be indicated by index-1, ix1, pro-1, me, and other devices). Papers sometimes include an appendix or a footnote listing the author's notational devices. In most instances, examples cannot be readily interpreted without the provision of some pictorial means: photographs, line drawings, and nowadays, links to videoclips. All of this is radically different from the well-established standard format for the representation of components of spoken language utterances, as briefly discussed in section 1.

In the following, we provide several examples of the diversity of notational devices. Many more can be found in the chapters of this handbook. Example (8) provides a typical ASL example, drawn at random from "The Green Book" of ASL instruction (Baker-Shenk/Cokely 1980, 148). The horizontal lines are useful in indicating the scope of the topic ("top") and negation ("neg") non-manuals, but note that there is no convenient way of using this sort of scope information in an automatic analysis of a database. Note, also, that an unspecified sign directed at the self is glossed here as me.

(8)  _____top_____  ______neg______
     write paper,   not-yet me                                   [ASL]
     'I haven't written the paper yet.'
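One way to make such scope information machine-usable is to store the non-manual labels as explicit spans over sign positions rather than as a drawn line. The sketch below encodes example (8) this way; the representation is our own, not an established format.

```python
# Example (8) with non-manual scope stored as (label, start, end) spans
# over sign indices (half-open intervals); this representation is ours.
glosses = ["write", "paper", "not-yet", "me"]
nonmanuals = [("top", 0, 2), ("neg", 2, 4)]

def in_scope(label):
    """Glosses over which a given non-manual has scope."""
    return [glosses[i]
            for lab, start, end in nonmanuals if lab == label
            for i in range(start, end)]

print(in_scope("top"))  # ['write', 'paper']
print(in_scope("neg"))  # ['not-yet', 'me']
```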
The examples in (9) to (12) illustrate quite different ways of indicating person. Examples (9) and (10) are from DGS (Rathmann/Mathur 2005, 238; Pfau/Steinbach 2003, 11), (11) is from ASL (Liddell 2003, 132), and (12) is from Danish Sign Language (DSL) (Engberg-Pedersen 1993, 57).
(9)  first(sg)asknonfirst(pl)
     'I asked them.'                                             [DGS]

(10) px py blume xgeby
     index index flower agr.s:give:agr.o
     'S/he is giving him/her a flower.'                          [DGS]

(11) pro-1 look-at→y
     'I look at him/her/it.'                                     [ASL]

(12) pron+fl deceive+fr   [fl = forward left, fr = forward right]
     'He_j deceives him_i.'                                      [DSL]
Each of these devices is transparent once one has learned the author's conventions. Each presents a different sort of information: e.g., first person singular versus first-person pronoun; direct object indicated by a subscript following a verb or by an arrow and superscript; directedness toward a spatial locus. And, again, there is no convenient automatic way of accessing or summarizing such information within an individual analysis or across analyses. We return to these issues below.
3.2. Problems of glossing

The line of glosses, regardless of its format, is problematic from a linguistic point of view. In the glossing conventions for spoken languages, the first line presents a linguistic example and the second line presents morpheme-by-morpheme glosses of the first line. The first line is given either in standard orthography (especially if the example is drawn from a written language) or in some sort of phonetic or phonemic transcription. It is intended to provide a schematic representation of the form of the linguistic entity in question. For example, in Comrie's (1981) survey of the languages of the then Soviet Union, Estonian examples are presented in their normal Latin orthography, such as (13) (Comrie 1981, 137):

(13)
ma p-ole korteri-peremees
I  neg-be apartment-owner
'I am not the apartment owner.'                                  [Estonian]
For languages that do not have literary traditions, Comrie uses IPA, as in example (14), from Chukchi, spoken by a small population in an isolated part of eastern Siberia (Comrie 1981, 250): (14)
tə-γətγ-əlqət-ərkən
1sg-lake-go-pres
'I am going to the lake.'                                        [Chukchi]
In (13) and (14), the second line consists entirely of English words and standard grammatical abbreviations, and it can be read and interpreted without knowledge of the acoustic/articulatory production that formed the basis for the orthographic representation in the first line.
(Note that in (13) ma is an independent word and is glossed as 'I', whereas in (14) tə- is a bound morpheme and is therefore glossed as '1sg-'.) In most publications on sign languages, there is no equivalent of the first line in (13) and (14). The visual/articulatory form of the example is sometimes available in one of the phonological notations discussed in section 2, or in pictorial or video form, or both. However, there is also no consensus on the information to be presented in the line of morpheme-by-morpheme glosses. Example (8) indicates non-manual expressions of topic and negation; examples (9) to (11) use subscripts or superscripts; (11) and (12) provide explicit directional information. Directional information is implicitly provided in (9) and (10) by the placement of subscripts on either side of a capital letter verb gloss. The first lines of (9), (11), and (12) need no further glossing and are simply followed by translations. By contrast, (10) provides a second line with another version of glossing. That second line is problematic: 'px' is further glossed as 'index' before being translated as 's/he', and the subscripts that frame the verb in the first line of glosses are replaced by grammatical notations in the second line of glosses. Beyond that, nothing is added by translating German blume and geb into English 'flower' and 'give'.

In fact, there is no reason, in an English-language publication, to use German words in glossing DGS (or Turkish words in glossing Turkish Sign Language, or French words in glossing French Sign Language, etc.). DGS is not a form of German. It is a quite different language that is used in German-speaking territory. Comrie did not gloss Chukchi first into Russian and then into English, although Russian is the dominant literary language in Siberia. The DGS and DSL examples in (9) and (12) appropriately use only English words. We suggest that publications in sign language linguistics provide glosses only in the language of the publication, that is, the description language. Thus, for example, an article published in German about ASL should not use capital letter English words, but rather German words, because ASL is not a form of English. This requires, of course, that the linguist grasp the meanings of the signs being analyzed, just as Comrie had to have access to the meanings of Estonian and Chukchi lexical items. The only proper function of a line of morpheme-by-morpheme glosses is to provide the meanings of the morphemes, lexical and grammatical. Other types of information should be presented elsewhere.

Capital letter glosses are deceptive with regard to the meanings of signs. This is because they inevitably bring with them semantic and structural aspects of the spoken language from which they are drawn. For example, in a paper written in German about Austrian Sign Language (ÖGS), the following utterance is presented in capitals: du dolmetscher. It is translated into German as 'Du bist ein Dolmetscher' (= 'You are an interpreter') (Skant et al. 2002, 177). German, however, distinguishes between familiar and polite second-person pronouns, and so what is presumably a point directed toward a familiar addressee is glossed as the familiar pronoun du and again as 'du' in the translation. In English, the gloss would be you, translated as 'you'. But ÖGS does not have familiar and polite pronouns of address. On some analyses, it does not even have pronouns. Glossing as index-2, for example, would avoid such problems.
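A standard interlinear format also supports mechanical consistency checks. The sketch below, our own illustration, verifies that each word in the form line has a gloss and that hyphen-separated morphs pair one-to-one with glosses, as in (13) and (14).

```python
# Check that a form line and its morpheme-by-morpheme gloss line align:
# equal word counts, and one gloss per hyphen-separated morph.
def check_alignment(form_line: str, gloss_line: str):
    forms, glosses = form_line.split(), gloss_line.split()
    if len(forms) != len(glosses):
        return False, "word counts differ"
    for form, gloss in zip(forms, glosses):
        if len(form.split("-")) != len(gloss.split("-")):
            return False, f"morph/gloss mismatch: {form!r} vs {gloss!r}"
    return True, "aligned"

# Example (13), Estonian, and example (14), Chukchi:
print(check_alignment("ma p-ole korteri-peremees",
                      "I neg-be apartment-owner"))
print(check_alignment("tə-γətγ-əlqət-ərkən", "1sg-lake-go-pres"))
```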
More seriously, a gloss can suggest an inappropriate semantic or grammatical analysis through the use of words of the glossing language. Any gloss carries the part-of-speech membership of a spoken language word, suggesting that the sign in question belongs to the same category. Frequently, such implicit categorizations are misleading.
In addition, any spoken word 'equivalent' will be part of a range of constructions in the spoken language, but not in the sign language. For example, on the semantic level, an ASL lexical item that requires the multiword gloss take-advantage-of corresponds to the meaning of the expression in an English utterance such as 'They took advantage of poorly enforced regulations to make an illegal sale'. However, the ASL form cannot be used in the equivalent of 'I was delighted to take advantage of the extended library hours to prepare for my exams'. There is definitely a sense of 'exploit a loophole' or 'get one over on another' to the ASL sign, whereas the English expression has a different range of meanings. On the grammatical level, a gloss can suggest an inappropriate analysis because words of the description language often fit into different construction types than words of the sign language. Slobin has recently discussed this issue in detail (Slobin 2008, 124):

Consider the much-discussed ASL verb invite (open palm moving from recipient to signer). This has been described as a "backwards" verb (Meir 1998; Padden 1988), but what is backwards about it? The English verb "invite" has a subject (the inviter) and an object (the invitee): "I invite you", for example. But is this what ASL 1.SG-invite-2.SG means? If so, it does appear to be backwards since I am the actor (or subject – note the confusion between the semantic role of actor and the syntactic role of subject) and you are the affected person (or object). Therefore, it is backwards for my hand to move from you to me because my action should go from me to you. The problem is that there is no justification for glossing this verb as invite. If instead, for example, we treat the verb as meaning something like "I offer that you come to me", then the path of the hand is appropriate. Note, too, that the open palm is a kind of offering or welcoming hand and that the same verb could mean welcome or even hire. In addition to the context, my facial expression, posture, and gaze direction are also relevant. In fact, this is probably a verb that indicates that the actor is proposing that the addressee move towards the actor and that the addressee is encouraged to do so. We don't have an English gloss for this concept, so we are misled by whatever single verb we choose in English.
The problem is that signs with meanings such as ‘invite’ are polycomponential, not reducible to single words in another language. What is needed, then, is a consistent form of representation at the level of meaning components, comparable to morphemic transcription of spoken languages. We use the term meaning component rather than morpheme because we lack an accepted grammatical model of sign languages. What is a gesture to one analyst might be a linguistic element to another; what is a directed movement to a spatial locus in one model might be an agreement marker in another. If we can cut loose from favorite models of spoken language we will be in a better position to begin fashioning adequate notation systems for sign languages. Historically, we are in a period that is analogous to the early Age of Exploration, when missionaries and early linguists wrote grammars for colonial languages that were based on familiar Latin grammatical models. Linguistics has broadened its conception of language structures over the course of several centuries. Sign language linguistics has had only a few decades, but we can learn from the misguided attempts of early grammarians, as well as the more recent successes of linguistic description of diverse languages. To our knowledge, there is only one system that attempts to represent sign languages at the same level of granularity as has been established for morphological description of spoken languages. This is the Berkeley Transcription System (BTS), which we describe briefly in the following section.
3.3. A first attempt: the Berkeley Transcription System (BTS)

The Berkeley Transcription System (BTS) was developed in the Berkeley Sign Language Acquisition Project in the 1990s (headed by Hoiting and Slobin), in order to deal with videotapes of child–caregiver interaction in ASL and Sign Language of the Netherlands (NGT). The system was developed by teams of signers, Deaf and hearing, in the US and the Netherlands, working with linguists and psycholinguists in both countries. Glosses of these two sign languages in English and Dutch made comparisons impossible, alerting the designers to the dangers of comparing two written languages rather than two sign languages. Furthermore, glosses in either language did not reveal the componential richness and multi-modal communication evident in the videos. In addition, it was necessary to type transcriptions in standard ASCII characters in order to carry out computer-based searches and summaries. The international child language field had already provided a standard transcription format, CHAT, linked to a set of search programs, CLAN. CHAT and CLAN are part of a constantly growing crosslinguistic archive of child language data, CHILDES (Child Language Data Exchange System; see section 6 for website). One goal of BTS is to enable developmental sign language researchers to contribute their data to the archive. BTS uses the CHAT format and is now incorporated into the CHILDES system; the full manual can be downloaded from the URL mentioned in section 6. A full description and justification of BTS, together with the 2001 manual, can be found in Slobin et al. (2001); a concise overview is presented by Hoiting and Slobin (2002). In addition to ASL and NGT, BTS has been used in child sign language studies of DGS (unpublished) and BSL. In particular, Gary Morgan has applied BTS to a developmental study of BSL (see section 6 for a link to his "End of Award Project Summary", which also includes some suggestions for improvements of BTS).

BTS aims at a sign language equivalent of the sort of morpheme-by-morpheme gloss line established for spoken languages, as discussed at the beginning of this chapter. A full BTS transcription can provide information on various tiers, including phonology, translation, and notations of gesture and concurrent behavior. Our focus here is on the level of meaning and the task of notating those components of complex signs that can be productively used to create meaningful complex signs. These components are manual and non-manual. Because we are far from consensus on the linguistic status of all meaning components, BTS does not refer to them as "morphemes"; the eventual theoretical aim, however, is to arrive at such consensus and provide a means of counting morphemes for developmental and crosslinguistic analysis. Signs that are not made up of recombinable elements of form, such as the juxtaposed open palms meaning 'book', are simply presented in traditional capital letter form, book. Although this sign may have an iconic origin, its components are not recombined to create signs related in meaning. Note, however, that when a sideward reciprocating movement is added, NGT verbalizes the sign into 'to read to', and the addition of a repeated up-and-down movement yields a verbal sign meaning 'to study'. It is on the plane of polycomponentiality that analysis into meaning components is required.
3.3.1. Manual components of signs

BTS is especially directed at the internal structure of verbs of location, placement, movement, transitive action, and the like; that is, verbs that are traditionally described as 'classifier predicates'.
The 'classifier' is a handshape that identifies referents by indicating a relevant property of the referent (see chapter 8 for discussion). The function of such elements is not to classify or categorize, but to indicate reference. For example, in a given sign language, a human being may be designated by several different 'classifiers', each of which singles out a discourse-relevant attribute of the person in question (upright, walking, adversarial, etc.). Therefore, BTS treats such meaning components as property markers, which are combined with other co-occurring components that specify event dimensions such as source, goal, path, manner, aspect, modality, and more (see Slobin et al. (2003) for justification of the replacement of 'classifier' by 'property marker').

Property markers are notated in terms of meaning, rather than form, using a standardized capital-letter notation. Lower-case letters indicate the (roughly) morphological meaning of a category, with upper-case letters giving a specific meaning within the category. For example, all property markers are indicated by pm', all locations are indicated by loc', and so forth. The inverted-V handshape that indicates an erect human being in some sign languages is notated as TL (two-legged animate being), rather than by "inverted-V" or other formal designations, which are properly part of phonological, rather than morphological, transcription. The upper-case abbreviations are intended to be transparent to users, similar to acc ('accusative') in standard morphological codes. Underscores are used to create more complex semantic codes, such as PL_VL (plane-vertical = 'vertical plane') or PL_VL_TOP (plane-vertical-top = 'the top of a vertical plane'). The designations are abbreviations of English words, for the convenience of English-speaking users. But the language of presentation is independent of the rationale of the transcription system. In the NGT version of BTS, these codes have Dutch equivalents, in order to make the system accessible to Dutch users who may not be literate in English. Similar accommodation is familiar in linguistic descriptions of spoken languages where, for example, past in English-language publications corresponds to vgh (Vergangenheit = past) in German-language publications.

Polycomponential signs are made up of many meaning components. These are separated by hyphens in order to allow for automatic counting of item complexity, on a par with morpheme counts in spoken languages. The model comes from linguistics, where a symbol such as cl ('classifier') is expanded by its specific lexical category. Consider the following two examples, cited by Grinevald and Seifart (2004): water itcl(liquid)-fall (Gunwinggu, p. 263); dem.prox-cl(disc) 'this one (coin, button, etc.)' (Tswana, p. 269). The punctuation format of BTS is designed to be compatible with CHAT, while maintaining the tradition of presenting meanings in capital letters. The meanings, however, are generally more abstract than English lexical glosses, especially with regard to verbs, the polycomponential heart of sign languages. Consider our standard example, 'grandfather give child ball', as transcribed from NGT in BTS format in (15). The three nouns do not require morphological analysis and are given in small caps. The two human participants are indexed at different spatial locations, (a) and (b), with the second location, that of the child, lower in signing space (loc'INF = location: inferior).

(15)
grandfather ix_3(a) ball child ix_3(b)-loc’INF pm’SPHERE-src’3(a)-gol’3(b)-pth’D
[NGT]
In contrast to the nouns, the verb is polycomponential, consisting of a cupped hand moving in a downward direction from one established locus in signing space to another. The verb thus consists of a property marker (SPHERE), a starting point of motion (src’3(a) = source: locus 3(a)), a goal of motion (gol’3(b) = goal: locus 3(b)), and a downward path (pth’D = path: down). By convention, a combination of src-gol entails directed motion from source toward goal. On this analysis, the NGT verb equivalent to ‘give’ in this context has four meaning components: pm, src, gol, pth.
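Because BTS separates meaning components with hyphens and marks each category with a lower-case prefix, the complexity count can be automated. The sketch below is our reading of the format, not an official BTS tool.

```python
# Parse a BTS polycomponential sign: components are hyphen-separated, and
# each joins a lower-case category to its content with an apostrophe.
def components(sign: str):
    return sign.split("-")

verb = "pm'SPHERE-src'3(a)-gol'3(b)-pth'D"   # the verb of example (15)

for part in components(verb):
    category, _, content = part.partition("'")
    print(f"{category:3s} -> {content}")
print("meaning components:", len(components(verb)))   # 4
```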
3.3.2. Non-manual components of signing

Facial cues, gaze, and body position provide crucial indications in sign languages, roughly comparable to prosody in speech and punctuation in writing. Indeed, non-manual devices are organizers, structuring meaning in connected discourse. On the utterance level, non-manuals distinguish topic, comment, and quotation; and speech acts are also designated by such cues (declarative, negative, imperative, various interrogatives, etc.). Non-manuals modulate verb meanings as well, adding conventionalized expressions of affect and manner; and gaze can supplement pointing or carry out deictic functions on its own. Critically, non-manuals are simultaneous with stretches of manual signing, with scope over the meanings expressed on the hands. Generally, the scope of non-manuals is represented by a line over a gloss, accompanied by abbreviations for functions such as 'neg' or 'wh-q', as shown in example (8) above. Gaze allocation, however, is hardly ever notated, although it can have decisive grammatical implications. BTS has ASCII notational devices for indicating gaze and the scope of non-manual components, including grammatical operators, semantic modification, affect, discourse markers (e.g., agreement, confirmation check), and role shift. The following examples demonstrate types of non-manuals, with BTS transcription, expanding the scenario of 'grandfather give child ball'.

Grammatical operators, such as negation, interrogation, and topicality, are temporally extended in sign languages, indicating scope over a phrase or clause. Modulations of meaning, such as superlative degree or intensity of signing, can have scope over individual items or series of signs. BTS indicates the onset and offset of a non-manual by means of a circumflex (^), in order to maintain linear ASCII notation for computer analysis. For example, operators are indicated by ^opr'X ... ^, where X provides the semantic/functional content of the particular operator, such as ^opr'NEG in the following example. Here someone asserts that grandfather did not give the child a ball, negating the utterance represented above in example (15).

(16)
grandfather ix_3(a) ball child ix_3(b)-loc’INF ^opr’NEG pm’SPHERE-src’3(a)-gol’3(b)-pth’D^
[ASL]
Discourse markers regulate the flow of communication between participants, checking for comprehension, indicating agreement, and so forth. In spoken languages, such markers fall under linguistic analysis when they are realized phonologically, are problematic when expressed by intonation contours, and are ignored when expressed by modulations of face or body. In both speech and sign, the full range of discourse markers deserves careful linguistic description and analysis.
^dis’X … ^, following the standard notation convention of lower-case linguistic category and upper-case content category, and bracketed circumflexes indicating onset and offset of the non-manual. For example, in NGT a signer can check to see if a message has been taken up by means of a brief downward head movement accompanied by raised eyebrows and a held direct gaze at the addressee. This non-manual can be called a “confirmation check”, indicated as ^dis’CONF^, generally executed while the last manual sign is held. Discourse markers can be layered with other non-manuals, requiring notation of one or more embeddings. Gaze allocation serves a variety of communicative functions ⫺ indicating reference, tracing a path, alternating between narrator and participant perspective, and others. BTS takes gaze at addressee as default and uses a preposed asterisk to indicate shift in gaze to an object or location, indicating the target of gaze in lower-case letters (counting gaze as a meaning component is under discussion, cf. Hoiting 2009). For example, a shift in gaze to the child would be notated as *child. In example (17), grandfather looks at the child, points at himself and then tells the child that he (src’1) will give her (gol’2) the ball (pm’SPHERE). (17)
*child pnt_1 pm’SPHERE-src’1-gol’2-pth’D
[ASL]
Role shift is carried out by many aspects of signing that allow the signer to subtly and quickly shift perspective from self to other participants in a narrative. The means of role shift have not been explored in detail in the literature, and BTS provides only a preliminary way of indicating that the signer has shifted into another role, using the notation 'RS ... '. A simple example is presented in (18). The grandfather now shows his grandchild how to catch the ball, pretending that he himself is the ball-catching child. He signs that he is a child and then role shifts into the child. This requires him to make himself small, looking upward to the 'pretend' grandfather, indicated by a superior location (loc'SUP), and lifting both his cup-shaped hands (pm'SPHERE) upward (pth'U = upward path). The role-shifted episode is bracketed with single quotes.

(18)
child ix_1 'RS *loc’SUP pm’SPHERE-pth’U '
[ASL]
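The circumflex brackets of the BTS notation can likewise be recovered mechanically. The regular expression below reflects our reading of the ^ ... ^ convention, applied to example (16); it is not an official BTS parser.

```python
import re

# Example (16): the negation operator scopes over the verb.
line = ("grandfather ix_3(a) ball child ix_3(b)-loc'INF "
        "^opr'NEG pm'SPHERE-src'3(a)-gol'3(b)-pth'D^")

# ^cat'CONTENT ... ^ marks onset and offset of a non-manual.
for m in re.finditer(r"\^(\w+)'(\w+)\s+(.*?)\^", line):
    category, content, scope = m.groups()
    print(f"{category}'{content} has scope over: {scope}")
# -> opr'NEG has scope over: pm'SPHERE-src'3(a)-gol'3(b)-pth'D
```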
3.4. Transcribing narrative discourse

Speech, unlike sign and gesture, is not capable of physically representing location and movement. Instead, across spoken languages, vast arrays of morphological and syntactic devices are used to keep track of who is where and to shift attention from one protagonist or location to another. Problems of perspective and of reference maintenance and shift are severe in the rapidly fading acoustic medium. Therefore, grammars make use of pronouns of various sorts, demonstratives, temporal deictics, intonation patterns, and more. Some of these forms are represented in standard writing systems; some are only hinted at by the use of punctuation; and many others simply are not written down, either in everyday writing or in transcription. For example, role shift can be indicated in speech by a layering of pitch, intonation, and rate, such as a rapid comment at lower pitch and volume, often with a different voice quality.
Sign languages, too, make use of layered expressions, but with a wider range of options, including not only rate and magnitude but also many parts of the face and body. It is a challenge to systematically record and notate such devices, a challenge that must be met by careful descriptive work before designating a particular device as 'linguistic', 'grammatical', 'expressive', and so forth. Accordingly, we include all such devices under the broad heading of the expression of meaning in sign.

Narrative discourse makes use of dimensions that are not readily captured in linear ASCII notation. A narrator sets up a spatial world and navigates between parts of it; the world may contain "surrogates" (Liddell 2003) representing entities at real-life scale; the body can be partitioned to represent the signer and narrative participants (Dudis 2004); part of the body can remain fixed across changes in other parts, serving as a "buoy" (Liddell 2003) or a "referring expression" (Bergman/Wallin 2003), functioning to maintain reference across clauses. Attempts have been made to notate such complex aspects of discourse by the use of diagrams, pictures with arrows, and multilinear formats. Recent attempts make use of multimedia data presentations, with multilinear coding and real-time capture, as discussed in section 4. Here we only point out a few of the very many representations that have been employed in attempts to solve some of these problems of transcription.
3.4.1. Positioning and navigation in signing space

Traditional linear notations, as well as BTS, can notate directionality with abbreviations of words such as left, right, forward, etc., but they cannot lay out a full space. There are many literal or near-literal depictions in the literature, including photographs with superimposed arrows, line drawings with arrows, and computer-generated stick figures with some kind of dynamic notation. We exclude these from our discussion of transcription, though they are very useful devices for helping the reader to visualize dynamic signing. One less literal notational technique consists in a schematized overhead view. For example, Liddell (2003, 106) combines linear transcriptions with diagrams to distinguish between multiple and exhaustive recipients of a directed sign such as ask-question. The superscript on the linear transcription distinguishes the two forms with explicit words in square brackets along with Liddell's invented arrow turning back on itself, which "indicates that the hand moves along a path, such that the extent of the path points toward entities a, b, and c" (Liddell 2003, 365). The transcription lines are accompanied by overhead diagrams with a schematic arrow indicating two types of hand/arm movements, as illustrated in Figure 43.1.

Morgan, in studies of narratives produced by BSL-signing children, uses two sorts of schematic diagram, accompanied by symbolic notation. In one example, Morgan (2005, 125) notates a description of a storybook picture in which a dog makes a beehive fall from a tree while a boy looks on in shock (Figure 43.2). The notation >< indicates mutual gaze between signer and addressee, contrasting with notations such as >> 'look right', ^< 'look up and left', and others. Time moves downward in this "dynamic space diagram".
Fig. 43.1: A combination of linear descriptions and diagrams (Liddell 2003, 106, Fig. 4.8). Copyright © 2003 by Cambridge University Press. Reprinted with permission.
Fig. 43.2: Transcribing a description of a storybook picture (Morgan 2005, 125, Fig. 4). Copyright © 2005 by John Benjamins. Reprinted with permission.
Fig. 43.3: Separate transcriptions for right and left hand (Morgan 2006, 331, Fig. 13⫺7). Copyright © 2006 by Oxford University Press. Reprinted with permission.
Morgan (2006, 331) uses a different sort of diagram to map out what each of the two hands is doing separately, showing an overlap between "fixed referential space" (FRS) and "shifted referential space" (SRS) in a BSL narrative. The caption to his original figure, which is included in Figure 43.3, explains the notational devices. Many examples of such diagrams can be found in the literature, accompanied by the author's guide to specialized notational devices. This is a domain of representation of signed communication that seems to require more than a standardized linear notation, and the profession could benefit from a consensus on standardized diagrammatic representation.
3.4.2. Representing surrogates

Liddell has introduced the notions of "surrogate" and "surrogate space" (1994; summarized in Liddell 2003, 141–175), in which fictive entities and areas are treated as if they were present at their natural scale, rather than as miniatures in signing space. A simple example is a verb of communication or transfer directed at an absent third person as if that person were present. If the person is a child, for example, the gesture and gaze will be directed downward (as in our example of grandfather giving a ball to a child). There is no standard way of notating surrogates; Liddell makes use of diagrams drawn from mental space theory, in which "mental space elements" are blended with "real space". Taub (2001, 82) represents surrogates by superimposing an imagined figure, as a line drawing, onto a photograph of the signer. Both signs in Figure 43.4 mean 'I give to her', but in A the surrogate is an adult and in B it is a child. Again, a standardized notation is needed.
Fig. 43.4: Representation of surrogate space (Taub 2001, 82, Fig. 5.13). Copyright © 2001 by Cambridge University Press. Reprinted with permission.
3.4.3. Body partitioning

Paul Dudis (2004; Wulf/Dudis 2005) has described how signers can partition their bodies to simultaneously present different viewpoints on a scene or different participants in an event. So far, there is no established means of notating this aspect of signing. For example, Dudis (2004, 232) provides a picture of a signer demonstrating that someone was struck in the face. The signer's face and facial expression indicate the victim, and the arm the assailant. Dudis follows Liddell's (2003) use of the vertical slash symbol to indicate the roles of body partitions, captioning the picture: "The |victim| and the |assailant's forearm|". Here we have another challenge for sign language transcription.
3.4.4. "Buoys" as reference maintenance devices

Liddell (2003) has introduced the term "buoy" for a hand that is held in a stationary configuration across a sequence of predications while the other hand continues producing signs. He notes that buoys "help guide the discourse by serving as conceptual landmarks as the discourse continues" (2003, 223). Consider the following rich example from Janzen (2008), which includes facial expression of point of view (POV) along with separate action of the two hands. The right hand (rh) represents the driver's vehicle, which remains in place as the POV shifts from the driver to an approaching police van, represented by the left hand (lh). Janzen (2008, 137) presents several photographs with superimposed arrows, a lengthy narrative description, and the following transcription, with a gloss of the second of three utterances (19b), presenting (19a) and (19c) simply as translations for the purposes of this example.

(19)
Liddell simply presents his examples in series of photographs from discourse, with no notational device for indicating buoys. What is needed is a notation that indicates the handshapes and positions of each of the hands in continuing discourse, often accompanied by rapid shifts in gaze. Bergman and Wallin (2003) make a similar observation, with different terminology, and provide a multilinear transcription format for examples from Swedish Sign Language.
They offer a format with separate lines for head, brows, face, eyes, left hand, right hand, and mouth. This format is only readable with reference to a series of photographs and an accompanying textual description.

In sum, we lack adequate notation systems for the complex, simultaneous, and rapidly shifting components of signing in discourse. Various multimedia formats, as discussed in the following section, promise to provide convenient ways to access these many types of information, linking transcriptions and notations to video. For purposes of comparability across studies, databases, and sign languages, however, standardized notation systems are still lacking.
4. Multimedia tools

Thus far, we have made the case for a robust transcription system that can note, in a systematic and language-neutral way, the morphological (in addition to the phonological) level of discourse, whether a narrative from a single interlocutor or a dialogue among two or more individuals. We have discussed both handwritten and computer-supported symbol sets, sometimes for the same transcription tools. Let us make explicit just a few of the implied challenges and advantages of computer-supported tools:

- Input: consistency of input; potential for automated correction insofar as legitimate sequences of characters can be characterized within the tools (cf. spell-checking); note also that there is increasingly good capture from stylus input devices, which might allow automated translation of manual coding into a standard representation of symbols.
- Searching, sorting, selecting: the potential for finding relative frequencies, or simply the occurrence or co-occurrence of elements at various levels, is much simplified when the elements are machine-readable, sortable, and searchable.
- Output: multiple varieties of output are possible, from screen views to printed and dynamic media formats.

The catalog of the LDC (Linguistic Data Consortium of the University of Pennsylvania) offers nearly 50 applications to use with linguistic data stored there, including (to choose just a few) content-based retrieval from digital video, discourse parsing, topic detection and analysis, speaker identification, and part-of-speech tagging. While the current catalog has corpora from over 60 languages (and at least one non-human species of animal calls), it does not include data from ASL or other sign languages. However, one can easily imagine that the proper tools would allow sign language data to be investigated as easily as the LDC's do today for spoken languages.

There are, of course, all the disadvantages of computer-supported tools which are not specific to this domain, just a few of which are mentioned here:

- These applications may initially be limited to one operating system, a small number of fonts, or other criteria that make early prototyping and development possible on a budget, but that also may limit the audience of possible users to a small niche within a specialized group.
- As with all software serving a small audience, the costs of continuous updates and improvements may prove prohibitive. Some tools which have been well-conceived and well-executed may find themselves orphaned by economic factors.
- The lack of standards in a new arena for software may cause a project to develop an application or product which becomes obsolete because it does not conform to a newly accepted standard. Some of the sign language transcription tools may fall into this trap. One recent discussion on SLLING-L went into the details of what the consequences for SignWriting or HamNoSys would be in a world where UTF-8 becomes the standard for email, the web, and all other renderings of fonts.

There are a number of additional desirable features for transcription which either exist now or are about to be realized for individuals and laboratories devoted to sign language linguistic study.
4.1. Multitier coding capacity

At least two tools are available that serve the sign language linguistics community: SignStream and ELAN, both multi-modal annotation and analysis tools.
4.1.1. SignStream

The SignStream application was created largely at Boston University in collaboration with others, both computing specialists and sign language researchers, under funding from the US National Science Foundation and other federal agencies. It is a database tool specifically designed for managing sign language data in a multi-level transcription system, keeping the relationships in time constant while allowing ever more granular descriptions at each level of the coding. It displays time on a horizontal axis (items occurring on the left prior to items occurring on the right). It permits viewing more than one utterance at a time, to allow side-by-side comparison of data. It has been designed to handle time-based media other than audio and video. The SignStream website (see section 6) gives May 2003 as the most recent release of the product (version 2.2), with a promised version 3 on the way.
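A minimal sketch of the kind of time-anchored, multi-level record such a tool maintains follows; the class and field names are ours, not SignStream's internal design.

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    start_ms: int
    end_ms: int
    value: str

@dataclass
class Utterance:
    # tier name -> time-aligned annotations on that tier
    tiers: dict = field(default_factory=dict)

    def at(self, t_ms: int):
        """Everything co-occurring at time t, across all tiers."""
        return {name: [a.value for a in anns
                       if a.start_ms <= t_ms < a.end_ms]
                for name, anns in self.tiers.items()}

utt = Utterance({
    "gloss":      [Annotation(0, 400, "write"),
                   Annotation(400, 900, "paper")],
    "non-manual": [Annotation(0, 900, "top")],
})
print(utt.at(500))   # {'gloss': ['paper'], 'non-manual': ['top']}
```

Keeping the time anchors constant while tiers grow more granular is exactly what allows side-by-side comparison of co-occurring manual and non-manual behavior.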
4.1.2. ELAN

Like SignStream, ELAN (formerly Eudico) is a linguistic annotation tool that creates tiers for markup, can coordinate transcription at each tier for distinct attributes, and can play back the video (or other) original recording along with the tiers. The tool is being developed at the Max Planck Institute for Psycholinguistics (see Wittenburg et al. 2006). ELAN is capable of coordinating up to four video sources, and of searching based on temporal or structural constraints. It is being used both for sign language projects, as part of ECHO (European Cultural Heritage Online), and for other studies of linguistic behavior which need access to multi-modal phenomena.
Fig. 43.5: Example of ELAN format (taken from: http://www.lat-mpi.eu/tools/elan/).
ELAN also aims to deliver multimedia data over the internet, with publicly available data collections (see section 6 for the ECHO website and the website at which the ELAN tools are available). Figure 43.5 shows a screenshot from ELAN, presenting part of an NGT utterance. Note that the user is able to add and define tiers, delimit temporal spans, and search at varying levels of specificity. A project comparing the sign languages of the Netherlands, Britain, and Sweden, "Language as cultural heritage: a pilot project with sign languages", is based at Radboud University and the Max Planck Institute for Psycholinguistics, both in Nijmegen (see section 6 for website). The project is part of the ECHO endeavor, and conventions are being developed for transcription within the ELAN system of articulatory behavior in the manual stream and in segmented areas of the face (brows, gaze, and mouth movements, the last on at least two levels). Again, these datasets include glossing and translation, but as yet there is no tier devoted to morphological structure. (We are aware of several studies in progress using BTS with ELAN to provide a morphological level for ASL and NGT data.)
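ELAN stores its annotations in an XML file format (.eaf) in which tiers contain time-aligned annotations whose endpoints point into a shared table of time slots. The sketch below pulls annotations out of such a file with the standard library; the file name is an assumption for illustration.

```python
import xml.etree.ElementTree as ET

def read_eaf(path):
    """Yield (tier, start_ms, end_ms, value) for time-aligned annotations."""
    root = ET.parse(path).getroot()
    # TIME_SLOT ids map to millisecond offsets.
    times = {ts.get("TIME_SLOT_ID"): int(ts.get("TIME_VALUE", "0"))
             for ts in root.iter("TIME_SLOT")}
    for tier in root.iter("TIER"):
        for ann in tier.iter("ALIGNABLE_ANNOTATION"):
            yield (tier.get("TIER_ID"),
                   times[ann.get("TIME_SLOT_REF1")],
                   times[ann.get("TIME_SLOT_REF2")],
                   ann.findtext("ANNOTATION_VALUE", default=""))

# 'session.eaf' is a placeholder path, not a file shipped with ELAN.
for tier_id, start, end, value in read_eaf("session.eaf"):
    print(f"[{tier_id}] {start}-{end} ms: {value}")
```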
4.2. Digital futures

The sign language data which have been collected and analyzed to date are recorded at a constant frame rate.
Tools specifically designed for sign languages may have an advantage which has been hinted at in this chapter, and implied by this volume: the characteristics of human language in another modality may make us better able to see and integrate analyses which have been ignored by spoken language researchers working with tools developed in the context of written language. Sign languages, like other non-written languages, bring our attention to the dynamic dimensions of communication phenomena in general.

Bigbee, Loehr, and Harper (2001) compare several existing tools (including at least two targeted at the sign language analysis community). They comment on the ways that SignStream can be adapted to track tiers of interest to spoken language researchers as well (citing a study of intonation and gesture that reveals "complementary discourse functions of the two modalities"). They conclude with a "tentative list of desired features" for a next-generation multi-modal annotation and analysis tool (reformatted here from their Table 3: Desired Features):

- video stream(s) time-aligned with annotation;
- directly supports XML tagsets;
- time-aligned audio waveform display;
- acoustic analysis (e.g. pitch tracking) tools included;
- direct annotation of video;
- hide/view levels;
- annotation of different levels;
- API and/or modular open architecture;
- music-score display;
- automatic tagging facilities;
- easy to navigate and mark start and stop frame of any video or audio segment;
- user can select current audio track from multiple available audio tracks;
- segment start and stop points include absolute time values (e.g. not just frames);
- user can create explicit relationships or links across levels;
- can specify levels and elements (attribute/values);
- inclusion of graphics as an annotation level;
- support for overlapping, embedding, and hierarchical structures in annotation;
- easy to annotate metadata (annotator, date, time, etc.) at any given level or segment;
- some levels time-aligned, others independent but aligned in terms of segment start/stop times;
- support for working with multiple synchronized video, audio, and vector ink media sources;
- import/export all annotations;
- cross-platform execution;
- query/search annotations.
Rohlfing et al. (2006) also provide a comparison of multi-modal annotation tools. We can imagine that sign language data are already being collected from blogs and video phone calls, and will soon come from mobile devices as well. Note that as data are increasingly collected from digital originals, our transcriptions will need to account for algorithms in the digital domain that systematically enrich or impoverish the signal. Consider the case of the MobileASL development. This University of Washington project is at present a proof of concept only, and not a product, but it is being developed with an eye to standards (in particular the H.264 video encoder). The researchers are optimizing the signal for transmission with smart compression, showing fewer frames from the person who is quiet, and more frames per second from the signer (Cavender et al. 2006; Cherniavsky et al. 2007).
recognize regions of the screen that transmit more information (hands, arms, face) and ignore regions that are not contributing much (below the waist). Fingerspelling requires more frames for intelligibility than most signs (especially on the small screen of a mobile phone), and thus that region is given a higher frame rate when fingerspelling is detected. What other aspects of signing might need a richer signal?
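The variable-frame-rate idea can be sketched in a few lines. The sketch below is our own illustration, not MobileASL code: an activity classifier (assumed here as a black box) drives the target frame rate, and source frames are dropped accordingly.

def choose_frame_rate(activity: str) -> int:
    # Hypothetical rates; real values depend on bandwidth and the encoder.
    rates = {
        "not_signing": 1,        # conversational partner is "quiet"
        "signing": 10,           # ordinary signing
        "fingerspelling": 15,    # needs the most temporal detail
    }
    return rates.get(activity, 10)

def encode_stream(frames, classify_activity, source_fps=30):
    """Drop frames so the effective rate tracks detected activity."""
    out, budget = [], 0.0
    for frame in frames:
        budget += choose_frame_rate(classify_activity(frame)) / source_fps
        if budget >= 1.0:        # emit a frame whenever the budget allows
            out.append(frame)
            budget -= 1.0
    return out

At a target of 10 fps against a 30 fps source, roughly every third frame survives; during detected fingerspelling the keep-rate rises.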
5. Conclusion

In conclusion, the study of sign language linguistics has blossomed in recent years, in a number of countries. Along with this growth have come tentative systems for representing sign languages, using a range of partial and mutually incompatible notational and storage devices. International conferences and discussion lists only serve to emphasize that the field is in a very early stage, as compared with the long traditions in the linguistics and, specifically, the transcription of spoken languages. There are many good minds in play, and much work to be done.
It is fitting to return to Elinor Ochs’s seminal 1979 paper, “Transcription as Theory”, which provided the epigraph to our chapter. Ochs was dealing with another sort of unwritten language ⫺ the communicative behavior of children. She concluded her chapter, some 30 years ago, with the question, “Do our data have a future?” (Ochs 1979, 72). We share her conclusion:

A greater awareness of transcription form can move the field in productive directions. Not only will we be able to read much more off our own transcripts, we will be better equipped to read the transcriptions of others. This, in turn, should better equip us to evaluate particular interpretations of data (i.e., transcribed behavior). Our data may have a future if we give them the attention they deserve.
Acknowledgements: The authors acknowledge the kind assistance of Adam Frost, who created the SignWriting transcription for this occasion, on the recommendation of Valerie Sutton, and of Rie Nishio, a graduate student at Hamburg University, who provided the HamNoSys transcription, on the recommendation of Thomas Hanke.
6. Literature and web resources

Baker, Anne/Bogaerde, Beppie van den/Woll, Bencie
2005 Methods and Procedures in Sign Language Acquisition Studies. In: Sign Language & Linguistics 8, 7⫺58.
Baker-Shenk, Charlotte/Cokely, Dennis
1980 American Sign Language: A Teacher’s Resource Text on Grammar and Culture. Silver Spring, MD: TJ Publishers.
Bergman, Brita/Boyes-Braem, Penny/Hanke, Thomas/Pizzuto, Elena (eds.)
2001 Sign Transcription and Database Storage of Sign Information (Special Issue of Sign Language & Linguistics 4(1/2)). Amsterdam: Benjamins.
Bergman, Brita/Wallin, Lars
2003 Noun and Verbal Classifiers in Swedish Sign Language. In: Emmorey, Karen (ed.), Perspectives on Classifier Constructions in Sign Languages. Mahwah, NJ: Lawrence Erlbaum, 35⫺52.
Bigbee, Tony/Loehr, Dan/Harper, Lisa
2001 Emerging Requirements for Multi-modal Annotation and Analysis Tools. In: Proceedings, Eurospeech 2001; Special Event: Existing and Future Corpora ⫺ Acoustic, Linguistic, and Multi-modal Requirements. [Available at: http://www.mitre.org/work/tech_papers/tech_papers_01/bigbee_emerging/index.html]
Cavender, Anna/Ladner, Richard E./Riskin, Eve A.
2006 MobileASL: Intelligibility of Sign Language Video as Constrained by Mobile Phone Technology. In: Assets ’06: Proceedings of the 8th International ACM SIGACCESS Conference on Computers and Accessibility.
Cherniavsky, Neva/Cavender, Anna C./Ladner, Richard E./Riskin, Eve A.
2007 Variable Frame Rate for Low Power Mobile Sign Language Communication. In: ACM SIGACCESS Conference on Assistive Technologies. [Available at: http://dub.washington.edu/pubs/79]
Comrie, Bernard
1981 The Languages of the Soviet Union. Cambridge: Cambridge University Press.
Dudis, Paul
2004 Body Partitioning and Real-space Blends. In: Cognitive Linguistics 15, 223⫺238.
Emmorey, Karen (ed.)
2003 Perspectives on Classifier Constructions in Sign Languages. Mahwah, NJ: Lawrence Erlbaum.
Engberg-Pedersen, Elisabeth
1993 Space in Danish Sign Language: The Semantics and Morphosyntax of the Use of Space in a Visual Language. Hamburg: Signum.
Frishberg, Nancy
1975 Arbitrariness and Iconicity: Historical Change in American Sign Language. In: Language 51, 696⫺719.
Frommer, Paul R./Finegan, Edward
1994 Looking at Language: A Workbook in Elementary Linguistics. Fort Worth, TX: Harcourt Brace.
Grinevald, Colette/Seifart, Frank
2004 Noun Classes in African and Amazonian Languages: Towards a Comparison. In: Linguistic Typology 8, 243⫺285.
Hoiting, Nini
2009 The Myth of Simplicity: Sign Language Acquisition by Dutch Deaf Toddlers. PhD Dissertation, University of Groningen.
Hoiting, Nini/Slobin, Dan I.
2002 Transcription as a Tool for Understanding: The Berkeley Transcription System for Sign Language Research (BTS). In: Morgan, Gary/Woll, Bencie (eds.), Directions in Sign Language Acquisition. Amsterdam: Benjamins, 55⫺75.
Janzen, Terry
2008 Perspective Shifts in ASL Narratives: The Problem of Clause Structure. In: Tyler, Andrea/Kim, Yiyoung/Takada, Mari (eds.), Language in the Context of Use. Berlin: Mouton de Gruyter, 121⫺144.
Kim, Young-joo
1997 The Acquisition of Korean. In: Slobin, Dan I. (ed.), The Crosslinguistic Study of Language Acquisition (Volume 4). Mahwah, NJ: Lawrence Erlbaum, 335⫺443.
Lehmann, Christian
1982 Directions for Interlinear Morphemic Translations. In: Folia Linguistica 16, 199⫺224.
Lehmann, Christian
2004 Interlinear Morphemic Glossing. In: Booij, Geert/Lehmann, Christian/Mugdan, Joachim (eds.), Morphology/Morphologie: A Handbook on Inflection and Word Formation/Ein Handbuch zur Flexion und Wortbildung (Volume 2). Berlin: Mouton de Gruyter, 1834⫺1857.
Liddell, Scott K.
1994 Tokens and Surrogates. In: Ahlgren, Inger/Bergman, Brita/Brennan, Mary (eds.), Perspectives on Sign Language Structure. Papers from the Fifth International Symposium on Sign Language Research. Durham: ISLA, 105⫺119.
Liddell, Scott K.
2003 Grammar, Gesture, and Meaning in American Sign Language. Cambridge: Cambridge University Press.
Mandel, Mark A.
1993 ASCII-Stokoe Notation: A Computer-writeable Transliteration System for Stokoe Notation of American Sign Language: http://www.speakeasy.org/~mamandel/ASCIIStokoe.html.
Meir, Irit
1998 Syntactic-semantic Interaction in Israeli Sign Language Verbs: The Case of Backwards Verbs. In: Sign Language & Linguistics 1, 1⫺33.
Morgan, Gary
2005 Transcription of Child Sign Language: A Focus on Narrative. In: Sign Language & Linguistics 8, 117⫺128.
Morgan, Gary
2006 The Development of Narrative Skills in British Sign Language. In: Schick, Brenda/Marschark, Marc/Spencer, Patricia E. (eds.), Advances in the Sign Language Development of Deaf Children. Oxford: Oxford University Press, 314⫺343.
Newkirk, Don E.
1989a SignFont: Exercises. Bellevue, WA: Edmark Corporation.
Newkirk, Don E.
1989b SignFont: Handbook. Bellevue, WA: Edmark Corporation.
Newkirk, Don E.
1997 “Re: SignWriting and Computers”. Contribution to a Discussion on the Sign Language Linguistics List (SLLING-L: [email protected]); Thu, 13 Feb 1997 09:13:05 -0800.
Ochs, Elinor
1979 Transcription as Theory. In: Ochs, Elinor/Schieffelin, Bambi B. (eds.), Developmental Pragmatics. New York: Academic Press, 43⫺72.
Padden, Carol
1988 Interaction of Morphology and Syntax in American Sign Language. New York: Garland.
Padden, Carol
2004 Translating Veditz. In: Sign Language Studies 4, 244⫺260.
Pfau, Roland/Steinbach, Markus
2003 Optimal Reciprocals in German Sign Language. In: Sign Language & Linguistics 6, 3⫺42.
Radutzky, Elena
1992 Dizionario della Lingua Italiana dei Segni [Dictionary of Italian Sign Language]. Rome: Edizioni Kappa.
Rathmann, Christian/Mathur, Gaurav
2002 Is Verb Agreement the Same Crossmodally? In: Meier, Richard P./Cormier, Kearsy/Quinto-Pozos, David (eds.), Modality and Structure in Signed and Spoken Languages. Cambridge: Cambridge University Press, 370⫺404.
Rathmann, Christian/Mathur, Gaurav
2005 Unexpressed Features of Verb Agreement in Signed Languages. In: Booij, Geert/Guevara, Emiliano/Ralli, Angela (eds.), Morphology and Linguistic Typology, On-line Proceedings of the Fourth Mediterranean Morphology Meeting (MMM4). [Available at: http://pubman.mpdl.mpg.de/pubman/item/escidoc:403906:5]
Rohlfing, Katharina et al.
2006 Comparison of Multimodal Annotation Tools ⫺ Workshop Report. In: Gesprächsforschung ⫺ Online-Zeitschrift zur Verbalen Interaktion 7, 99⫺123. [Available at: http://www.gespraechsforschung-ozs.de/heft2006/tb-rohlfing.pdf]
Schermer, Trude
2003 From Variant to Standard: An Overview of the Standardization Process of the Lexicon of Sign Language of the Netherlands Over Two Decades. In: Sign Language Studies 3, 469⫺486.
Skant, Andrea/Okorn, Ingeborg/Bergmeister, Elisabeth/Dotter, Franz/Hilzensauer, Marlene/Hobel, Manuela/Krammer, Klaudia/Orter, Reinhold/Unterberger, Natalie
2002 Negationsformen in der Österreichischen Gebärdensprache. In: Schulmeister, Rolf/Reinitzer, Heimo (eds.), Progress in Sign Language Research: In Honor of Siegmund Prillwitz. Hamburg: Signum, 163⫺185.
Slobin, Dan I.
2008 Breaking the Molds: Signed Languages and the Nature of Human Language. In: Sign Language Studies 8, 114⫺130.
Slobin, Dan I./Hoiting, Nini/Anthony, Michelle/Biederman, Yael/Kuntze, Marlon/Lindert, Reyna/Pyers, Jennie/Thumann, Helen/Weinberg, Amy
2001 Sign Language Transcription at the Level of Meaning Components: The Berkeley Transcription System (BTS). In: Sign Language & Linguistics 4, 63⫺96.
Slobin, Dan I./Hoiting, Nini/Anthony, Michelle/Biederman, Yael/Kuntze, Marlon/Lindert, Reyna/Pyers, Jennie/Thumann, Helen/Weinberg, Amy
2003 A Cognitive/Functional Perspective on the Acquisition of “Classifiers”. In: Emmorey, Karen (ed.), Perspectives on Classifier Constructions in Sign Languages. Mahwah, NJ: Lawrence Erlbaum, 271⫺296.
Stokoe, William
1978 Sign Language Structure. Silver Spring, MD: Linstok Press [reprinted from 1960].
Stokoe, William C./Casterline, Dorothy C./Croneberg, Carl G.
1965 A Dictionary of American Sign Language on Linguistic Principles. Washington, DC: Gallaudet University Press. [Revised edition, Silver Spring, MD: Linstok Press, 1976.]
Supalla, Ted
2004 The Validity of the Gallaudet Lecture Films. In: Sign Language Studies 4, 261⫺292.
Taub, Sarah F.
2001 Language from the Body: Iconicity and Metaphor in American Sign Language. Cambridge: Cambridge University Press.
Wittenburg, Peter/Brugman, Hennie/Russel, Albert/Klassmann, Alex/Sloetjes, Han
2006 ELAN: A Professional Framework for Multimodality Research. In: Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC 2006), 1556⫺1559. [Available at: http://www.lat-mpi.eu/papers/papers-2006/elan-paperfinal.pdf]
Wulf, Alyssa/Dudis, Paul
2005 Body Partitioning in ASL Metaphorical Blends. In: Sign Language Studies 5, 317⫺332.
Web resources:
BTS (Berkeley Transcription System): http://ihd.berkeley.edu/Slobin-Sign%20Language/%282001%29%20Slobin,%20Hoiting%20et%20al%20-%20Berkeley%20Transcription%20System%20%28BTS%29.pdf
CHILDES (Child Language Data Exchange System): http://childes.psy.cmu.edu
ECHO: http://echo.mpiwg-berlin.mpg.de/home
ELAN tools: http://www.lat-mpi.eu/tools/elan/
HamNoSys: http://www.sign-lang.uni-hamburg.de/Projects/HamNoSys.html
“Language as cultural heritage: a pilot project with sign languages”; Radboud University and Max Planck Institute for Psycholinguistics, Nijmegen: http://www.let.ru.nl/sign-lang/echo/
Leipzig Glossing Rules (LGR): http://www.eva.mpg.de/lingua/resources/glossing-rules.php
MobileASL: http://dub.washington.edu/projects/mobileasl
Morgan, Gary: Award Report “Exchanging Child Sign Language Data through Transcription”: http://www.esrcsocietytoday.ac.uk/my-esrc/grants/RES-000-22-0446/read
SignStream: http://www.bu.edu/asllrp/SignStream/
SignWriting teaching course: http://signwriting.org/lessons/lessonsw/lessonsweb.html
SignWriting examples from various sign languages: www.signbank.org
SignWriting ASL symbol cheat sheet by Cherie Wren: http://www.signwriting.org/archive/docs5/sw0498ASLSymbolCheetSheet.pdf
Nancy Frishberg, San Carlos, California (USA)
Nini Hoiting, Groningen (The Netherlands)
Dan I. Slobin, Berkeley, California (USA)
44. Computer modelling

1. Introduction
2. Computational lexicography
3. Computer corpora for sign linguistics research and grammar modelling
4. Sign capture and recognition
5. Automated signing (synthesis)
6. Machine translation and animation
7. Social challenges of automated signing
8. Conclusion
9. Literature and web resources
Abstract

The development of computational technologies in sign language research is motivated by the goal of providing more information and services to deaf people. However, sign languages contain phenomena not seen in traditional written/spoken languages and therefore pose particular challenges to traditional computational approaches. In this chapter, we give an overview of the different areas of computer-based technologies in this field. We briefly describe some current systems, also addressing their limitations and pointing out further motivation for the development of new systems.
1. Introduction

In this chapter, we will focus on the presentation of fundamental research and development in computer-based technology which opens up new potential applications for sign
language communication and human-computer interaction. The possibilities have grown in recent years with rapid hardware development, more active linguistic research, and exploitation of 3D graphics technologies, resulting in many different applications for sign languages such as multimedia dictionaries (VCom3D 2004 (see section 9 for website); Buttussi/Chittaro/Coppo 2007), teaching materials (Sagawa/Takeuchi 2002; Karpouzis et al. 2007), and machine translation systems in the ViSiCAST project (Elliott et al. 2000, 2008) and from VCom3D (Sims 2000). The structure of the chapter reflects the main areas of the field. Section 2 explains differences between machine-readable dictionaries and lexicons and also mentions tools and methodologies within computational lexicography. Section 3 explains the role of electronic corpora for research and gives a short overview of existing corpus collections and their shortcomings and potential. Within section 3, we also describe annotation tools and standards for the use of an electronic corpus. Section 4 introduces the reader to sign language recognition techniques after a brief historical overview of the field. Section 5 is about automated signing or synthesis. In this section, we also give an example of how the lexicon and grammar can be modelled for computational purposes. Section 6 briefly describes some machine translation systems for sign languages with some aspects of generating animation. Last but not least, in section 7, we also mention some social challenges of automated signing, which involve misunderstandings about such research in both the hearing and deaf communities. Some sections might seem rather technical for the general reader; however, our aim is to give an overview of the complexity that is involved in this field and to motivate the interested reader to further study the topic.
2. Computational lexicography

Computational lexicography is a discipline that is interconnected with linguistics and computer science. Lexicography ⫺ as the branch of applied linguistics concerned with the design and construction of lexicons ⫺ can benefit from linguistic research by giving an increasingly detailed description of the lexicon paradigmatically and syntagmatically (refinement of valences, syntactic and semantic classes, collocational restrictions, etc.). On the other hand, computational methods and tools are needed to automate dictionary construction and maintenance. However, there is a considerable difference between dictionaries/lexicons for human consumption and those for machine use. Handke (1995) distinguishes between machine-readable dictionaries and machine-readable lexicons: the former are electronic forms of book dictionaries, that is, machine-readable versions of published dictionaries for referencing by a human (e.g. CD-ROM), whereas the lexical components of a natural language processing system are called lexicons. The former rely heavily on the linguistic and world knowledge of the user, which may make them unsuitable for computational processing of the language. A lexicon for machine use has to be explicit, which means it has to contain a formal description of data, and it has to be systematic and flexible (see also the following sections). Therefore, the term ‘Computational Lexicography’ mostly refers to gathering lexical information for use by automated natural language processing systems, that is, developing lexicons for machine use, but the term can also be extended
to the computational techniques in the development of dictionary databases for human use (Boguraev/Briscoe 1989).
Paper dictionaries provide static images and/or descriptions of signs. However, they are not the best solution, as it is not easy to represent movements on paper. Therefore, several researchers have proposed multimedia dictionaries for sign languages of specific countries (Wilcox et al. 1994; Sims 2000; BritishSignLanguage.com 2000 (see section 9 for website); amongst others), but there are only a few proposals for multi-language dictionaries. Moreover, current multimedia dictionaries suffer from serious limitations. Most of them allow only for a word-to-sign search, while only a few of them exploit sign parameters (i.e., the basic units of signs: handshape, orientation, location, and movement). Therefore, Buttussi, Chittaro, and Coppo (2007) propose a sign-to-word and sign-to-sign search in an online international sign language dictionary which exploits Web3D technologies. The user chooses the parameters (or ‘cheremes’; cf. Stokoe/Casterline/Croneberg 1976), and the H-Anim humanoid’s posture or movement is updated to preview the resulting sign. As sign recognition techniques improve (see section 4), sign search in dictionaries may become even more user-friendly.
On the other hand, the function and use of a machine-readable lexicon and its structure depend on its general application area, that is, on whether it is interfaced with other modules such as a parser or a morphological component, and whether it is used interactively. Therefore, it is often the case that such purpose-built lexicons cannot be generalised to other systems (Boguraev/Briscoe 1989). An example of a lexicon for generation purposes will be discussed in section 5.3.
Computational methods have given rise to new tools and methodologies for building computational lexicons (iLex, as an example of such a tool, is described in section 3.2). In order to avoid judgements based solely on the intuitions of linguists, evidence of the lexical behaviour of signs has to be found by analysis of corpora/unrestricted text (Ooi 1998). Statistical analysis can be used for checking consistency, detecting categories, word and collocation frequencies, links between grammar and the lexicon, etc. In order to gain a greater linguistic understanding, many researchers advocate annotation (a structural mark-up or tagging) of the corpus. In section 3, we discuss related issues of corpus annotation further.
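The parameter-based (sign-to-word) search described above can be sketched as a simple filter over a parameterised lexicon. The toy lexicon and the parameter labels below are our own invention; real systems would index HamNoSys-style values rather than these plain-English labels:

from dataclasses import dataclass

@dataclass
class SignEntry:
    gloss: str
    handshape: str
    orientation: str
    location: str
    movement: str

LEXICON = [
    SignEntry("HAVE", "flat", "palm-in", "chest", "contact"),
    SignEntry("TAKE", "cee", "palm-left", "neutral", "toward-body"),
]

def sign_to_word(**query):
    """Return glosses whose parameters match every specified value;
    unspecified parameters remain unconstrained."""
    return [e.gloss for e in LEXICON
            if all(getattr(e, k) == v for k, v in query.items())]

print(sign_to_word(handshape="cee", movement="toward-body"))  # ['TAKE']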
3. Computer corpora for sign linguistics research and grammar modelling

A definition of corpus provided by Sinclair (1996) in the framework of the EAGLES project (see section 9 for website) runs as follows: “A corpus is a collection of pieces of language that are selected and ordered according to explicit linguistic criteria in order to be used as a sample of the language”. Atkins et al. (1991) differentiate a corpus from a generic library of electronic texts as a well-defined subset that is designed according to specific requirements to serve specific purposes. Furthermore, the definition of a computer corpus crucially states that “[a] computer corpus is a corpus which is encoded in a standardised and homogenous way for open-ended retrieval tasks […]” (Sinclair 1996).
An electronic corpus is of the utmost importance for the creation of electronic resources (grammars and dictionaries) for any natural language. Some methodological and technical challenges are inherent to the nature of sign languages themselves. Languages without a written form (especially sign languages) lack even the most basic textual input for morphological and phrasal level analysis. So even at these levels, any leverage that corpora and statistical techniques may give is unavailable. The significance of sign language features has been characterised informally within sign language linguistics; however, more precise definitions and formulations of such phenomena are required in order to build computational models that can lead to computer-based facilities for deaf people. For that purpose, research needs to employ a number of data collection and evaluation techniques. Further, a substantial corpus is needed to drive automatic recognition and generation, serving as a target form to which synthetic signing should aspire. The synthetically generated signing can also be reviewed against the corpus to determine whether inadequacies result from grammatical formulation or from graphical realisation.
Several groups have worked on digital sign language corpora (see section 9 for a list of websites), but most of them have focused on linguistic aspects rather than computational processing (see also section 3.1 on coding conventions and re-usability). These corpora are also either too small or too general for natural language processing tasks, and therefore are unsuitable for training a statistical system or fail to provide sufficiently fine-grained details for driving an avatar. While linguists try to obtain an understanding of how signing is used (coarticulation, sentence boundaries, role shift, etc.), computer scientists are interested in data for driving an avatar or for automatic recognition of signing (i.e. data from tracking movements, facial expressions, timing, etc.). Examples of such linguistic or recognition-focussed corpora are Neidle (2002, 2007) for the former and Bowden (2004) for the latter.
For multi-lingual research and applications, parallel corpora are basic elements, as in the case of translation-memory applications and pattern-matching approaches to machine translation. However, parallel corpus collection for sign languages has so far been undertaken only on a small scale or for interpreters, and not for semi-spontaneous signing by native signers. Most available sign language corpora contain simple stories performed by a single signer. The non-written nature of sign language, as well as the risk of influence from written majority languages, complicates the collection of a parallel corpus. The Dicta-Sign project (Efthimiou et al. 2009) intends to construct the first parallel corpus to support future sign language research. The corpus will inevitably cover a limited domain but will allow for linguistic comparison across sign languages, support for multilingual recognition and generation, and research into (shallow) translation between sign languages. The establishment of substantial electronic corpora from which the required information can be derived could also significantly improve the productivity of sign language researchers. The standard approach for parallel corpora is to translate a source text available in one language into all the other languages and then align the resulting texts.
For sign languages, however, this approach would lead to language use not considered natural by most signers. Instead, Dicta-Sign works with native or near-native signers interacting in pairs in different communication settings, thus coming as close as possible to natural
conversation (given the necessity of operating in a studio and the domain restrictions). The approach taken is to design elicitation tasks that result in semantically close answers without predetermining the choice of vocabulary and grammar.
3.1. Annotation coding standards, conventions, and re-usability

In 1998⫺2000, a survey of sign language resources worldwide conducted by the Intersign network (funded by the European Science Foundation) showed that most sign language corpora in existence at that time were small-scale and had been collected with a single purpose in mind. In many cases, the only property that sign language transcriptions had in common was that they used some sort of glosses. But even there, glossing conventions differed from research group to research group, and over time also within groups, and coding of form was often limited to a bare minimum. Re-use of corpora was an exception, as was sharing the data with other institutions. No common coding standards were followed, nor had coding conventions been documented in every case. This situation made it impossible to exchange data between projects and prevented researchers from building on others’ experience; similar work had to be repeated for each new project, slowing down progress. A further problem arising from the lack of standards is that consistency was not guaranteed even within corpus projects. Standardization would improve methodology in research and would ease collaboration between institutes and projects. Consistency in the use of ID-glosses, tiers, and field values (see section 3.2) would make the use of a corpus more productive: the corpus would become machine-readable, that is, searchable and automatically analysable.
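The payoff of ID-gloss consistency is that a corpus becomes mechanically checkable. A minimal sketch, with hypothetical glosses, of the kind of validation a shared lexicon makes possible:

ID_GLOSSES = {"INDEX", "HAVE", "TAKE", "MUG"}      # the agreed lexicon

def check_glosses(tokens):
    """Report annotation tokens that do not match any agreed ID-gloss."""
    return [(i, t) for i, t in enumerate(tokens) if t not in ID_GLOSSES]

tokens = ["INDEX", "TAKE", "CUP"]                  # "CUP" vs. agreed "MUG"
print(check_glosses(tokens))                       # [(2, 'CUP')]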
3.2. Annotation tools and technical issues

Annotation tools used for sign language corpora, such as AnCoLin (Braffort et al. 2004), Anvil (Kipp 2001), ELAN (Wittenburg et al. 2006), iLex (Hanke 2002), SignStream (Neidle 2001), and syncWRITER (Hanke 2001), define a temporal segmentation of a video and annotate time intervals in a multitude of tiers for transcription. Tiers usually hold text values, and linguistic modelling often consists only of restricting tags to take values from a user-defined list. Some tools go slightly beyond this basic structure by allowing complex values and database reference tags in addition to text tags (iLex for type/token matching), image tags and selection of poster frames from the video instead of text tags (syncWRITER), or relations between tiers (e.g. to exclude co-occurrence of a one-handed and a two-handed token in ELAN and others). Some tools support special input methods for notation system fonts (e.g. iLex for HamNoSys (Hanke 2004)). However, an equivalent to the different graphical representations of sound found in spoken language tools is not available. Also, since current tools do not feature any kind of automation, the tagging process is completely manual. Manual annotation of long video sequences becomes error-prone and time-consuming, with the quality depending on the annotator’s knowledge and skills. The Dicta-Sign
project therefore proposes a way to integrate automatic video processing with the annotator’s knowledge. Moreover, technological limitations of the annotation tools have often made it difficult to use data synchronised with video independently of the tools originally used (Hanke 2001). Where standard tools have been used, synchronisation with video was missing, making verification of the transcription very difficult. This situation has changed somewhat in recent years as sign language researchers have started to use more open tools, given the greater availability of corpus tools for multimodal data. Some projects, such as the EC-funded ECHO project (2002⫺2004) and the US National Center for Sign Language and Gesture Resources at Boston University (1999⫺2002), have established corpora, each with a common set of conventions. Tools such as iLex (Hanke 2002) specifically address data consistency issues caused by the lack of a writing system with a generally accepted orthography. The Nijmegen Metadata Workshop 2003 (Crasborn/Hanke 2003) defined common metadata standards for sign language corpora, but to date few studies adhere to these. For most of the tools currently in use for sign language corpus collection, data exchange on a textual level is no longer a problem. The problem of missing coding conventions, however, is still a real one.
One of the most widely used annotation tools is ELAN, which was originally created to annotate audio and video with text. Playing video files on a time line is typical of such programmes: the user assigns values to time segments. Annotations of various grammatical levels are linked to the time tokens. Annotations are grouped in tiers created by the user, which are layers of statistically analysable information represented in a hierarchical fashion. However, glosses are text strings just like any other annotation or commentary (see also chapter 43, Transcription).
iLex is a transcription database for sign language combined with a lexical database. At the heart of transcribing with iLex is the type-token matching approach. The user identifies candidate types to be related to a token by (partial) glosses, form descriptions in HamNoSys, or meaning attributions. This method allows automatic production of a dictionary (by lemmatisation) within a reasonable time. It also supports consistent glossing by being linked to a lexical database that handles glosses as names of database entities.
SignStream (see also chapter 43, Transcription) maintains a database consisting of a collection of utterances, each of which associates a segment of video with a fine-grained multi-level transcription of that video. A database may incorporate utterances pointing to one or more movie files. SignStream allows the user to enter data in a variety of fields, such that the start and end points of each data item are aligned to specific frames in the associated video. A large set of fields and values is provided; however, the user may create new fields or values or edit the existing set. Data may be entered in one of several intuitive ways, including typing text, drawing lines, and selecting values from menus. It is possible to display up to four different synchronised video files, in separate windows, for each utterance. It is also possible to view distinct utterances (from one or more SignStream databases) on screen simultaneously.
Anvil is a tool for the annotation of audio-visual material containing multimodal dialogue.
The multiple layers are freely definable by inserting time-anchored elements with typed attribute-value pairs. Anvil is highly generic, platform-independent, XML-based, and fitted with an intuitive graphical interface. For project integration, Anvil offers the import of speech transcriptions and the export of text and table data for further
statistical processing. While not designed specifically to handle sign language, its capabilities for handling multimodal media make it a suitable tool for some signing applications.
4. Sign capture and recognition

Computer graphics research for sign language began in the 1980s. Sign capture and recognition work focussed initially on motion capture of manual signing (section 4.1). Later approaches analyse signing using much less intrusive, video-based techniques. Current research addresses sign capture of non-manual aspects and the use of manual and non-manual information in combination (section 4.2).
4.1. Motion capture of manual signs

Following a period of more active sign language research, Loomis, Poizner, and Bellugi (1983) introduced an interactive computer graphic system for the analysis and modelling of sign language movement, which was able to extract grammatical information from changes in the movement and spatial contouring of the hands and arms. The recognised signs were presented by animating a ‘skeleton’ (see section 5.1). The first multimedia sign language dictionary for American Sign Language (ASL) was proposed by Wilcox et al. (1994), using videos for sign language animations. Since a 2D image may be ambiguous, a preliminary 3D arm model for sign language animation was proposed by Gibet (1994), but her model did not have enough joints to be suitable for signing.
In 2002, Ryan Patterson developed a simple glove which sensed hand movements and transmitted the data to a device that displayed the fingerspelled text on a screen. CyberGloves have a larger repertoire of sensors and are more practical for capturing the full range of signs (see section 9 for Patterson glove and CyberGlove websites). There is a range of motion capture systems that have been applied to capture sign language, including complex systems with body suits, data-gloves, and headgear that allow for the collection of data on body movements, hand movements, and facial expressions. These systems can be intrusive and cumbersome to use but, after some post-processing, provide reliable and accurate data on signing. The TESSA project (Cox et al. 2002) was based on this technology: the signer’s hand, mouth, and body movements were captured and stored, and the data were then used to animate the avatar when needed (see section 9 for website).
4.2. Video-based recognition

4.2.1. Manual aspects

A less intrusive, but computationally much more challenging approach is to process images from video cameras to identify signs. Starner (1996) developed a camera-based system that required the signer to wear two different coloured gloves, but in later
versions no gloves were required. The image data processed by computer vision systems can take many forms, such as video sequences or views from multiple cameras. There has been extensive work on the recognition of one-handed fingerspelling (e.g. Bowden/Sahardi 2002; Lockton/Fitzgibbon 2002), although this is a small subset of the overall problem. For word-level sign recognition, the most successful methods to date have used devices such as data-gloves and electromagnetic/optical tracking, rather than monocular image sequences, and have achieved lexicon sizes as high as 250 base signs. However, vision approaches to recognition have typically been limited to around 50 signs, and even this has required a heavily constrained artificial grammar on the structure of the sentences (Starner/Pentland 1995; Vogler/Metaxas 1998).
The application of statistical machine learning approaches based on Hidden Markov Models (HMMs) has been very successful in speech recognition research. Adopting a similar approach, much sign language recognition research is based on extracting vectors of relevant visual features from the image and attempting to fit HMMs (Starner/Pentland 1995; Vogler/Metaxas 1998; Kraiss 2006). To cover the natural variation in events and the effects of co-articulation, large amounts of data are required. These HMM approaches, working on 20⫺50 signs, typically required 40⫺100 individual training examples of each sign. An alternative approach based on classification using morphological features has achieved very high recognition rates on a 164-sign lexicon with as few as a single training example per sign. No artificial grammar was used in this approach, which has been applied to two European sign languages, British Sign Language (BSL) and Sign Language of the Netherlands (NGT) (Bowden et al. 2004). The classification architecture is centred around a linguistic model of a sign rather than an HMM. A symbolic description is based upon linguistically significant parameters for handshape, movement, orientation, and location, similar to components used in a HamNoSys description.
In the Dicta-Sign project (Efthimiou et al. 2009), these techniques are extended to larger lexicon recognition, from isolated sign recognition to continuous sign recognition for four national sign languages, the aim being to improve the accuracy further through the addition of natural sign language grammar and linguistic knowledge. Crucially, the project will also take into account non-manual aspects of signing, which have largely been ignored in earlier approaches to sign language recognition (see section 4.2.2).
Although it is acceptable to use intrusive motion capture equipment where highly accurate sign capture is needed, video-based techniques are more appropriate for capture of signing by general users. Accurate analysis of signs depends on information about the 3D position of the arms and hands (depth information). While it is difficult to extract 3D information from monocular video input, the Kinect peripheral for the Microsoft Xbox 360 is a low-cost device that provides accurate real-time 3D information on the position of a user’s arms, though less information on handshape. Experimental sign recognition systems have been developed for a limited range of gestures, and it is to be expected that more comprehensive systems using Kinect will develop rapidly.
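The HMM recipe described above can be illustrated with the third-party hmmlearn package: one model per sign, trained on sequences of per-frame feature vectors, with classification by maximum likelihood. This is a minimal sketch of the general approach, not a reconstruction of any of the cited systems:

import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_models(training_data, n_states=5):
    """training_data: {gloss: [seq, ...]}; each seq is an
    (n_frames, n_features) array of per-frame visual features."""
    models = {}
    for gloss, seqs in training_data.items():
        X = np.concatenate(seqs)              # stack all examples of this sign
        lengths = [len(s) for s in seqs]      # frame count of each example
        models[gloss] = GaussianHMM(
            n_components=n_states, covariance_type="diag", n_iter=20
        ).fit(X, lengths)
    return models

def recognise(models, seq):
    # Classify an unseen sequence by maximum log-likelihood over the models.
    return max(models, key=lambda gloss: models[gloss].score(seq))

The data-hunger noted above shows up directly here: each Gaussian HMM needs enough example sequences per sign to estimate its transition and emission parameters reliably.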
4.2.2. Non-manual aspects

Early work on automatic facial expression recognition by Ekman, Friesen, and Hager (1978) introduced the Facial Action Coding System (FACS). FACS provided a prototype
of the basic human expressions and allowed researchers to study facial expression based on an anatomical analysis of facial movements. A movement of one or more muscles in the face is called an action unit (AU), and all facial expressions can then be described by a combination of one or more of 44 AUs. Viola and Jones (2004) built a fast and reliable face detector using a ‘boosting’ technique that improves accuracy by tuning classifiers to deal better with difficult cases. Wang et al. (2004) extended this technique to facial expression recognition by building separate classifiers of features for each expression.
Sign language is inherently multi-modal, since information is conveyed through many articulators acting concurrently. In Dicta-Sign (Efthimiou et al. 2009), the combined use of manual aspects of signs (e.g. handshapes, movement), non-manual aspects (e.g. facial expressions, eye gaze, body motion), and possibly lip-reading is treated as a problem in fusion of multiple sign modalities. Extraction of 3D information is simplified by the use of binocular video cameras for data recording. In other pattern recognition applications, the combination of multiple information sources has been shown to be beneficial, e.g. in sign recognition (Windridge/Bowden 2004) and audio-visual speech recognition (Potamianos et al. 2003). The key observation is that combining complementary data sources leads to better recognition performance than is possible using the component sources alone (Kittler et al. 1998).
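A minimal sketch of such fusion, in the spirit of the sum rule discussed by Kittler et al. (1998); the class labels and scores are invented for illustration:

def fuse_sum_rule(score_lists):
    """score_lists: one {label: posterior score} dict per modality.
    Sum the per-class scores across modalities and pick the winner."""
    labels = set().union(*score_lists)
    fused = {lab: sum(s.get(lab, 0.0) for s in score_lists) for lab in labels}
    return max(fused, key=fused.get)

manual = {"WH-QUESTION": 0.40, "YES/NO-QUESTION": 0.60}   # hands alone: unsure
brows  = {"WH-QUESTION": 0.85, "YES/NO-QUESTION": 0.15}   # furrowed brows
print(fuse_sum_rule([manual, brows]))                     # 'WH-QUESTION'

The example shows the complementarity argument in miniature: the manual channel alone would misclassify, but the non-manual channel tips the fused decision.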
5. Automated signing (synthesis)

With high-bandwidth broadband networks becoming widely available, it is practical to use video technology to display fixed sign language content. However, where sign sequences are prepared automatically by computer-based techniques, an alternative is to use 3D computer graphics technology and present signing through a virtual human character or ‘avatar’. The displayed signing can be based on the smoothed concatenation of motion-captured data, as with Tessa (Cox et al. 2001), or can be synthesised from a representation in a sign language gesture notation (Kennaway 2002).
5.1. Virtual signing using 3D avatars

The standard approach to avatar animation involves defining a ‘skeleton’ that closely copies the structure of the human skeleton, as in the H-Anim standard. A 3D ‘mesh’ encloses the skeleton, and a ‘texture’ applied to the mesh gives the appearance of the skin and clothing of the character. Points on the mesh are associated with segments of the skeleton so that when the bones of the skeleton are moved and rotated, the mesh is distorted appropriately, giving the appearance of a naturally moving character. Expressions on the face are handled specially, using ‘morph targets’ which relocate points on the facial mesh so that the face takes on a target expression. By varying the offsets of the points from their location on a neutral face towards the location in the morph target, an expression can be made to appear and then fade away (a short code sketch of this blending follows the list of systems below).
Animation data for an avatar therefore takes the form of sets of parameters for the bones and facial morphs, for each frame or animation time-step. Animation data can
be derived from motion capture or conventional animation techniques involving posing the avatar skeleton by hand. For synthetic animation, the location of the hands is calculated relative to the body, and the technique of inverse kinematics is used to compute the position of arms and elbows. For signing applications, a more detailed skeleton may be required, paying attention to the scope for articulating the hands, and good quality facial animation must be supported. The specification of an avatar will include information about key locations on the body used in signing (Jennings et al. 2010).
A number of synthetic sign language animation systems have been developed over the past decade or so:
⫺ in the ViSiCAST and eSIGN projects at the University of East Anglia (Elliott et al. 2000, 2008; see section 9 for website);
⫺ at the Chinese Academy of Sciences, whose system also includes recognition technology (Chen et al. 2002, 2003);
⫺ in the DePaul University ASL Project (Sedgwick et al. 2001);
⫺ in the South African SASL-MT Project (Van Zijl/Combrink 2006);
⫺ for Japanese Sign Language (Morimoto et al. 2004);
⫺ in the Thetos translation system for Polish Sign Language (Francik/Fabian 2002; Suszczanska et al. 2002).
⫺ VCom3D (Sims 2000) has for some years marketed Sign Smith Studio, a signing animation system, which was originally implemented in VRML (Virtual Reality Modelling Language) but now uses proprietary software. Applications include sign language instruction, educational materials, communication tools, and presentation of sign language on websites.
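To make the morph-target blending described in section 5.1 concrete, here is a minimal sketch (our own illustration; the array shapes and the weight schedule are assumptions, not any system's internals):

import numpy as np

def blend_face(neutral, targets, weights):
    """neutral: (n_vertices, 3) facial mesh; targets: {name: (n_vertices, 3)};
    weights: {name: float in [0, 1]} for the current animation time-step."""
    face = neutral.copy()
    for name, w in weights.items():
        face += w * (targets[name] - neutral)   # linear offset towards target
    return face

# A raised-brows expression appearing over 10 frames and then fading again
# could use a weight schedule such as w(t) = min(t, 20 - t) / 10 for t in
# range(21), ramping the offset up and back down.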
5.2. Computational techniques to generate signing with avatars

Since natural sign language requires extensive parameterisation of base signs for location, direction of movement, and classifier handshapes, it is too restrictive to base synthesis on a fixed database of signs. One approach is to create a linguistic resource of signs via motion-captured data collection and to use machine learning and computational techniques to model the movement and to produce natural-looking sign language (Lu 2010). This approach echoes the use of sampled natural speech in the most successful speech synthesis systems for hearing people. The alternative is to develop a sign language grammar to support synthesis and visual realisation by a virtual human avatar, given a phonetic-level description of the required sign sequence.
Speech technology exploits phonological properties of spoken words to develop speech synthesis tools for unrestricted text input. In the case of sign languages, a similar approach is being experimented with, in order to generate signs not by mere video recording, but rather by composing the phonological components of signs. During the production of synthesised sign phrases, morphemes with grammatical information may be generated in a cumulative way to parameterise a base sign (e.g. three-place predicate constructions) and/or simultaneously with base morphemes. In the latter case, they are articulated by means of non-manual signals, in parallel with
the structural head sign performed by the manual articulatory devices, resulting in a non-linear construction that conveys the intended linguistic message.
Sign language synthesis is heavily dependent on (i) the natural language knowledge that is coded in a lexicon of annotated signs, and (ii) a set of rules that allows structuring of core grammar phenomena, making extensive use of feature properties and structuring options. This is necessary in order to guarantee the linguistic adequacy of the signing performed. Computer models require precise formulation of language characteristics, which current sign language linguistics often does not provide. One of the main objectives is a model that can be used to analyse and generate natural signing. But with signing, it is difficult to verify that our notations and descriptions are adequate ⫺ hence the value of an animation system to verify transcriptions and synthesised signing, confirming (or not) that they capture the essence of sign.
In the following sections, we briefly discuss some examples of modelling the sign language lexicon and grammar which support synthetic generation and visual realisation by avatars. We describe one model in more detail, and in section 5.4.2 we provide an example of the generation process.
5.3. Modelling the lexicon

In section 2, we mentioned that sign search in dictionaries might become even more user-friendly with the help of computer technologies. However, such dictionaries are not fine-grained enough for the synthetic generation of signing by an avatar; therefore, a more formal description of the data is required for building machine-readable lexicons.
Modelling of the lexicon is influenced by the choice of the phonetic description model. Filhol (2008) challenges traditional, so-called parametric approaches, as they cannot address underspecification, overspecification, and iconicity of the sign. In contrast to traditional systems, like Stokoe’s (1976) system and HamNoSys (Prillwitz et al. 1989), he suggests a temporal representation based on Liddell and Johnson’s (1989) descriptions, which uses the three traditional manual parameters (handshape, movement, location) but defines timing units in which those parameters hold. Speers’ (2001) work is also based on that theory. The ongoing Dicta-Sign project (Efthimiou et al. 2009) is looking to extend the HamNoSys/SiGML system with such temporal units.
In the following, we will discuss with the help of an example how a lexicon for machine use can be constructed for synthetic sign generation purposes. The ViSiCAST HPSG (Head-driven Phrase Structure Grammar; Pollard/Sag 1994) feature structure, which is based on types and type hierarchies (Marshall/Safar 2004), is an example of an approach using such a (parametric) lexicon (see section 5.4 for details on HPSG). The type word is the feature structure for an individual sign, and is subclassified as verb, noun, or adjective. Verb is further subclassified to distinguish fixed, directional (parameterised by start/end positions), and manipulative (parameterised by a proform classifier handshape) verbs. Combinations of these types are permitted; for example, ‘take’ is a directional manipulative verb (see example (2) below). Such a lexicon aims at fine-grained detail, containing all the information about the entries needed for generation above word level (and possibly contributing to the analysis of continuous signing as well).
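A rough sketch of this type hierarchy in code (our own illustration, not the actual ViSiCAST encoding) makes the contrast between fixed and parameterised entries visible; the parameter values are invented:

from dataclasses import dataclass
from typing import Optional

@dataclass
class Verb:
    gloss: str
    directional: bool = False      # start/end positions parameterised
    manipulative: bool = False     # classifier handshape parameterised
    handshape: Optional[str] = None
    start_loc: Optional[str] = None
    end_loc: Optional[str] = None

# A fixed verb: everything is specified in the lexicon itself.
HAVE = Verb("HAVE", handshape="flat", start_loc="chest", end_loc="chest")

# 'take' is directional and manipulative: its slots (None) stay open
# until the object complement and the signing-space context are known.
TAKE = Verb("TAKE", directional=True, manipulative=True)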
The left hand side (LHS) of an HPSG entry (to the left of the arrow in examples (1) and (2)) is either a list of HamNoSys symbol names (in the (a)-examples) or of HamNoSys transcription symbols (in the (b)-examples) for manuals and non-manuals instead of a word. Part of the entry is the phonetic transcription of the mouthing, for which SAMPA, a machine-readable phonetic alphabet, was applied (for instance, in (1a), the SAMPA symbol “{“ corresponds to the IPA symbol “æ”). On the right hand side of the arrow (RHS), the grammatical information, which will be described in more detail below, is found.

(1) a. The entry have with HamNoSys symbol names of SiGML (Signing Gesture Markup Language; Elliott et al. 2010), which is used in the lexicon
    b. The same with HamNoSys transcription symbols

(2) a. The entry take with SiGML names
    b. The same with HamNoSys symbols

(The HamNoSys/SiGML notation of these entries is rendered in special fonts and is not reproduced here.)
In example (2), the type of the sign (i.e. directional and capable of incorporating classifiers) is reflected by the fact that many symbols are left uninstantiated; these are represented as placeholders (SiGML symbol names or HamNoSys symbols beginning with capital letters in (2ab)). This contrasts with the representation of a fixed sign such as have in example (1), where no placeholders for SiGML symbol names or HamNoSys symbols are used.
On the right hand side (RHS), the uninstantiated values of the phonetic (PHON) features in the HPSG feature structure are instantiated and propagated to the LHS (for example, the handshape (Hsh) symbol in (2)) via unification and principles. In this way, a dynamic lexicon has been created. The HPSG feature structure starts with the standard PHON (phonetic; Figure 44.1), SYN (syntactic; Figure 44.3 in section 5.4), and SEM (semantic; Figure 44.2) components common to HPSG.
Fig. 44.1: The PHON features of the verb take
Fig. 44.2: The SEM features of the verb take
In the following, we discuss these three components for the verb take given in (2). The PHON component describes how signs are formed by handshape, palm orientation, extended finger direction (Efd), location, and movement, using the HamNoSys conventions. As for the non-manuals, eye-brow movement and mouth-picture were implemented (PHON:FACE:BROW and PHON:MOUTH:PICT); see Figure 44.1.
and their values are propagated to the PHON structure of the verb in the unification process). It also contains information on how pluralisation can be realised, and on mode, which is associated with sentence type and pro(noun) drop. The context feature is used to locate entities in the three-dimensional signing space. These positions are used for referencing and for directional verbs, where such positions are obligatory morphemes. This feature is propagated through the derivation. Movement of objects in signing space, and thus the maintenance of the CONTEXT feature, is achieved by associating an ADD_LIST and a DELETE_LIST with directional verbs (Safar/Marshall 2002). For more details on these lists, see also section 5.4.2.
The SEM structure includes semantic roles with WordNet definitions for sense, to avoid potential ambiguity in the English gloss (Figure 44.2).
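The unification mechanism that fills such open slots can be sketched with nested dictionaries. This toy illustration (anticipating the mug example in section 5.4.2) stands in for the real HPSG machinery:

def unify(fs1, fs2):
    """Unify two feature structures represented as nested dicts.
    None marks an uninstantiated value; incompatible values clash."""
    out = dict(fs1)
    for key, v2 in fs2.items():
        v1 = out.get(key)
        if key not in out or v1 is None:
            out[key] = v2                     # fill an absent or open slot
        elif isinstance(v1, dict) and isinstance(v2, dict):
            out[key] = unify(v1, v2)          # recurse into sub-structures
        elif v1 != v2:
            raise ValueError(f"clash on {key}: {v1} vs {v2}")
    return out

# The verb 'take' leaves its handshape open; unifying with the classifier
# information contributed by the complement 'mug' instantiates it:
take = {"PHON": {"HSH": None, "MOV": "toward-body"}}
mug_cl = {"PHON": {"HSH": "hamceeall"}}
print(unify(take, mug_cl))
# {'PHON': {'HSH': 'hamceeall', 'MOV': 'toward-body'}}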
5.4. Modelling the grammar

Computer models of grammar often favour lexicalist approaches, which are appropriate for sign languages, which display less variation in their grammars than in their lexicons. Efthimiou et al. (2006) and Fotinea et al. (2008) use HamNoSys (Prillwitz et al. 1989) input to produce representations of natural signing. The adopted theoretical analysis follows a lexicalist approach where the development of the grammar module involves a set of rules which can handle sign phrase generation as regards the basic verb categories and their complements, as well as extended nominal formations. The generation system in Speers (2001) is implemented as a Lexical-Functional Grammar (LFG) correspondence architecture, as in Kaplan et al. (1989), and uses empty features in Move-Hold notations of lexical forms (Liddell/Johnson 1989), which are instantiated with spatial data during generation.
The framework chosen by ViSiCAST for sign language modelling was HPSG, a unification-based grammar. In HPSG, differences between languages are encoded in the lexicon, while grammar rules are usually shared, with occasional variation in semantic principles. A further consideration in favouring HPSG is that the feature structures can incorporate modality-specific aspects (e.g. non-manual features) of signs appropriately (Safar/Marshall 2002). Results of translation were expressed in HamNoSys. The back-end of the system was further enhanced during the eSIGN project with significant improvements to the quality and precision of the manual signing, near-complete coverage of the manual features of HamNoSys 4, an extensible framework for non-manual features, and a framework for the support of multiple avatars (Elliott et al. 2004, 2005, 2008).
In sign language research, HPSG has not been greatly used (Cormier et al. 1999). In fact, many sign languages display certain characteristics that are problematic for HPSG, for example, the use of pro-drop and verb-final word order. It is therefore not surprising that many of the rules found in the HPSG literature do not apply to sign languages and need to be extended or replaced. The principles behind these rules, however, remain intact. Using HPSG as an example, we will show how parameterisation works to generate signing from phonetic-level descriptions. The rules in the grammar deal with the sign order of (pre-/post-)modifiers (adjuncts) and (pre-/post-)complements. In the following, we will first introduce the HPSG principles of grammar, before providing an example of parameterisation.
5.4.1. Principles of grammar

HPSG grammar rules define (i) what lexical items can be combined to form larger phrases and (ii) in what order they can be combined. Grammaticality, however, is determined by the interaction between the lexicon and principles. This interaction specifies general well-formedness. The principles can be stated as constraints on the types in the lexicon. Below, we list the principles which have been implemented in the grammar so far.

⫺ Mode
The principle of MODE propagates the non-manual value for eye-brow movement (neutral, furrowed, raised), which is associated with the sentence type in the input (declarative, yes-no question, or wh-question).

⫺ Pro-drop
The second type of principle deals with pro-drop, that is, the non-overt realisation of pronouns. For handling pro-drop, an empty lexical entry was introduced. The principle checks the semantic head for the values of the subject and object pro-drop features. Figure 44.3 shows the SYN:HEAD:PRODRP_OBJ and SYN:HEAD:PRODRP_SUBJ features for all three persons, where in each case three values are possible: can, can’t, and must. We then extract the syntactic information for the empty lexical item, which has to be unified with the complement information of the verb. If the value is can’t, then pro-drop is not possible; in the case of can, we generate both solutions.

⫺ Plurals
The third type of principle controls the generation of plurals, although still in a somewhat overgeneralised way. The principle handles repeatable nouns, non-repeatable nouns with external quantifiers, and plural verbs (for details on pluralisation, see chapter 6). The input contains the semantic information which is needed to generate plurals and which results from the analysis of the spoken language sentence. Across sign languages, distributive and collective meanings of plurals are often expressed differently, so the semantic input also has to specify that information. English, for example, is often underspecified in this respect; therefore, in some cases, human intervention is required at the analysis stage. The lexical item determines whether it allows repetition (reduplication) or sweeping movement. The SYN feature thus contains the ALLOW_PL_REPEAT and the ALLOW_PL_SWEEP features (according to this model, a sweeping movement indicates the collective involvement of a whole group, while repetition adds a distributive meaning). When the feature’s value is yes in either case, the MOV (movement) feature in PHON is instantiated to the appropriate HamNoSys symbol expressing repetition or sweeping motion, in agreement with the SEM:COUNT:COLLORDIST feature value. Pluralisation of verbs is handled similarly. For more on plurality, its issues, and its relation to signing space, see Marshall and Safar (2005).

⫺ Signing Space
The fourth type of principle concerns the management of the signing space. Due to the visual nature of sign languages, referents can be located and moved in the 3D
Fig. 44.3: The SYN features of the verb take
Fig. 44.3A: The CONTEXT feature within SYN
signing space (see chapter 19, Use of Sign Space, for details). Once a location in space is established, it can be targeted by a pointing sign (anaphoric relationship), and it can define the starting or end point of a directional verb; the necessary positions are obtained by propagating a map of sign space positions through the derivation. The missing location phonemes are available in the SYN:HEAD:CONTEXT feature. Verb arguments are distributed over different positions in the signing space. If the verb involves the movement of a referent, then it will be deleted from the ‘old’ position and added to a ‘new’ position. Figure 44.3A, which is part of the SYN structure depicted in Figure 44.3, shows the CONTEXT feature with an ADD_LIST and a DELETE_LIST. These lists control the changes to the map. The CONTEXT_IN and CONTEXT_OUT features are the initial input and the changed output lists of the map. The map is threaded through the generation process. The final CONTEXT_OUT will be the input for the next sentence.
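A sketch of this map-threading, with illustrative referent and location labels (our own, not the system's actual representation): CONTEXT_IN holds current referent locations, the verb's DELETE_LIST and ADD_LIST move referents, and the result becomes CONTEXT_OUT for the next sentence.

def apply_context(context_in, delete_list, add_list):
    context_out = dict(context_in)
    for referent in delete_list:
        context_out.pop(referent, None)    # position freed by the movement
    context_out.update(add_list)           # new or moved referents
    return context_out

context_in = {"MUG": "loc_right_low", "INDEX-1": "signer"}
# 'take' moves the mug from its established locus towards the signer:
context_out = apply_context(context_in,
                            delete_list=["MUG"],
                            add_list={"MUG": "signer"})
print(context_out)   # {'INDEX-1': 'signer', 'MUG': 'signer'}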
5.4.2. An example of parameterisation

We now discuss an example of a lexical entry that has uninstantiated values on the RHS in the PHON structure. Consequently, the LHS HamNoSys representation needs to be parameterised as well (for details, see Marshall/Sáfár 2004, 2005). In example (2) above for the entry take, the LHS contains only the HamNoSys structure that specifies take as a directional classifier verb. The handshape (Hsh), the extended finger direction (Efd), and the palm orientation (Plm) are initially uninstantiated and are resolved when the object complement is processed. The object complement, a noun, has the SYN:HEAD:AGR:CL feature, which contains information on the different classifier possibilities associated with that noun. Example (3) is a macro, that is, a named pattern that expands into a more complex structure – in effect, a shortcut.

(3)
The nmanip macro for a noun like mug:

    upright_cylinder macro
      cl_ndh:hns_string,
      cl_const:hns_string,
      cl_hsh:[hamceeall],
      cl_ori:(plm:[hampalml], efd:Efd).
In the unification process, this information becomes available to the verb, and its PHON features can therefore be instantiated and propagated to the LHS. Example (4) shows the SYN:PRECOMPS feature with a macro as it is used in the lexical entry of take (‘@’ introduces a macro call; in this example, the nmanip macro expands to (3)). Figure 44.4 represents the same information as an attribute value matrix (AVM), which is part of Figure 44.3:

(4)
    syn:precomps: [ (@nmanip(Ph, Gloss, Index2, Precomp1, Hsh, Efd, Plm, Sg)),
                    (@np2(W, Glosssubj, Plm2, EfdT, Index1, Precomp2, Num, PLdistr)) ]
Therefore, if the complement is mug, as in our example (3), Hsh and Plm are instantiated to [hamceeall] and [hampalml], respectively. The complements are also added to the allocation map (signing space). The allocation map is likewise available to the verb, which governs the allocation and deletion of places in the map (see the SYN:HEAD:CONTEXT feature in Figure 44.3). CONTEXT_IN holds all the available and occupied places in signing space. CONTEXT_OUT is the modified list with new referents and with positions that have become available again as a result of movement (the starting point of the movement is the original position of mug, which becomes free once the mug has been moved). Therefore, the locations for the start and end position (and potentially the Efd) can be instantiated in the PHON of the verb and propagated to the LHS. Heightobj and Distobj stand for the location of the object, which, in the case of take, is the starting point of the sign. Heightsubj and Distsubj stand for the end point of the movement, which is the location of the subject in signing space. The Brow value is associated with the sentence type in the input and is propagated throughout. R1 (see examples (1) and (2) above) is the placeholder for the sweeping motion of the plural collective reading. R2 stands for the repetition of the movement for a distributive meaning. The verb’s SYN:HEAD:AGR:NUM:COLLORDIST feature is unified with the SEM:COUNT feature values. If the SYN:ALLOW_PL_SWEEP or the SYN:ALLOW_PL_REPEAT feature permits, then R1 or R2 can be instantiated according to the semantics.
Fig. 44.4: The PRECOMPS feature of the verb take
If the semantic input specifies the singular, R1 and R2 remain uninstantiated and are ignored in the HamNoSys output. This linguistic analysis can then be linked with the animation technology by encoding the result in XML as SiGML, which is then sent to the JASigning animation system (Elliott et al. 2010).
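The interplay between the ALLOW_PL_* lexical features and the COLLORDIST semantics can be summarised procedurally. The following Python sketch only illustrates the selection logic – the grammar itself expresses it through unification – and the function and value names are invented:

    # Illustrative sketch: choosing the plural movement modification for a sign.
    # R1 = sweeping motion (collective reading); R2 = repetition (distributive).

    def plural_movement(number, coll_or_dist, allow_sweep, allow_repeat):
        """Return the movement modification to instantiate, or None for singular.

        number:       'sg' or 'pl' (from the semantic input, SEM:COUNT)
        coll_or_dist: 'collective' or 'distributive' (the COLLORDIST value)
        allow_sweep / allow_repeat: the sign's ALLOW_PL_SWEEP / ALLOW_PL_REPEAT
        """
        if number == 'sg':
            return None                  # R1 and R2 stay uninstantiated
        if coll_or_dist == 'collective' and allow_sweep:
            return 'R1_sweep'            # whole-group, sweeping movement
        if coll_or_dist == 'distributive' and allow_repeat:
            return 'R2_repeat'           # reduplicated movement
        return None                      # this lexical item resists pluralisation

    assert plural_movement('pl', 'collective', True, True) == 'R1_sweep'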
6. Machine translation and animation

Within sign language machine translation (MT), we differentiate two categories: scripting and MT software (Huenerfauth/Lu 2012). In scripting, the user chooses signs from an animation dictionary and places them on a timeline; the result is then synthesised with an avatar. An example is the eSIGN project, which allows the user to build sign databases and scripts of sentences and to view the resulting animations. Another example is Sign Smith Studio from VCom3D (Sims/Silverglate 2002), a commercial software system for scripting ASL animations with a fingerspelling generator and some non-manual components (see section 9 for the eSIGN and Sign Smith Studio websites). The scripting software requires a user who knows the sign language in use. Signs can be created through motion capture or by using standard computer graphics tools to produce fixed gestures. These can then be combined into sequences and presented via avatars, using computer graphics techniques to blend smoothly between signs.

Simple MT systems have been built on such a fixed database of signs. Text-to-sign-language translation systems like VCom3D (Sims/Silverglate 2002) and Simon (Elliott et al. 2000) present textual information as Signed English (SE) or Sign Supported English (SSE): SE uses signs in English word order and follows English grammar, while in SSE, only the key words of a sentence are signed. The Tessa system (Cox et al. 2002) translates from speech to BSL by recognising whole phrases and mapping them to natural BSL using a domain-specific template-based grammar.

True MT for sign language involves a higher-level translation, where a sign language sentence is automatically produced from a spoken language sentence (usually a written sentence). The translation is decomposed into two major stages. First, the English text is analysed into an intermediate (transfer or interlingua) representation (see below for explanation). In the second stage, sign generation, a language model (for grammar and lexicon; see section 5) is used to construct the sign sequence, including non-manual components, from the intermediate representation. The resulting symbols are then animated by an avatar.

MT systems have been designed for several sign languages; here, we only mention the different types of approaches. Some MT systems only translate a few sample input phrases (Zhao et al. 2000), others are more developed rule-based systems (Marshall/Sáfár 2005), and there are some statistical systems (Stein/Bungeroth/Ney 2006).

The above-mentioned rule-based (ViSiCAST) system is a multilingual sign translation system designed to translate from English text into a variety of national sign languages (e.g. NGT, BSL, and German Sign Language (DGS)). English written text is first analysed by CMU’s (Carnegie Mellon University) link grammar parser (Sleator/Temperley 1991) and a pronoun resolution module based on the Kennedy and Boguraev (1996) algorithm. The output of the parser is then processed using λ-calculus,
β-reduction, and Discourse Representation Structure (DRS) merging (Blackburn/Bos 2005). The result is a DRS (Kamp/Reyle 1993) modified to achieve a more sign language oriented representation that subsequently supports an easier mapping into a sign language grammar. This is called the interlingual approach: the intermediate representation is the semantic expression of the sentence (specifying how the concepts in the sentence relate to each other). In transfer systems, by contrast, the intermediate representation is usually the same syntactic structure that results from the analysis of the input sentence. In order to be translated, this structure of the source language has to be transferred to the target language structure, which can be computationally expensive if several languages are involved. Since the aim in ViSiCAST was that the system be adaptable for several language pairs, a relatively language-independent meaning representation was required. The chosen interlingua had an advantage over the interlingua approach of the Zardoz system (Veale et al. 1998): the DRS-based semantic approach is highly modular, allowing the development of a grammar for the target sign language which is independent of the source language.

Building this intermediate semantic representation, the DRSs, completes the first major stage of the translation. The second major stage is the translation from the DRS representation into graphically oriented representations which can drive a virtual avatar. This sequence, which is generated via HPSG, consists of HamNoSys for manual features and codes for non-manual features. This linguistic analysis, encoded as SiGML, can then be linked with the JASigning animation system (as mentioned in section 5.4.2). Because the JASigning system supports almost the full range of HamNoSys, the MT system can animate an arbitrary number of specialised versions of signs, rather than relying on a predefined animation library.

Speers (2001) (see also section 5.4) describes another translation system, implemented as an LFG correspondence architecture (Kaplan/Bresnan 1982; Kaplan 1989). For the conversion, three types of structural representations are assumed:
(i) f-structure (functional structure: the grammatical relations in the sentence);
(ii) c-structure (constituent structure: the phrase-structure tree);
(iii) p-structure (the phonetic representation level, where spatial and non-manual variations are revealed).
Correspondence functions are defined that first convert an English f-structure into an ASL f-structure, subsequently build an ASL c-structure from the f-structure, and finally build the p-structure from the c-structure. However, the current output file created by the ASL generation system is only viewable within this system. “Because of this, the data is only useful to someone who understands ASL syntax in the manner presented here, and the phonetic notation of the Move-Hold model. In order to be more generally useful several different software applications could be developed to render the data in a variety of formats.” (Speers 2001, 83)
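The overall shape of such a two-stage pipeline can be sketched as follows. This Python outline is only a schematic of the data flow described above, not project code; each stub stands in for a whole subsystem, and all names are invented:

    # Schematic of a two-stage rule-based text-to-sign translation pipeline.

    def analyse(english_text):
        """Stage 1: text -> intermediate semantic representation (a toy 'DRS').

        A real system runs a parser, pronoun resolution, beta-reduction, and
        DRS merging; this stub just wraps the words to stay self-contained.
        """
        return {'predicates': english_text.lower().rstrip('.').split()}

    def generate(drs):
        """Stage 2: intermediate representation -> sign sequence.

        A real generator consults an HPSG grammar and lexicon and emits
        HamNoSys plus non-manual codes; the stub glosses each predicate.
        """
        return [{'gloss': p.upper(), 'hamnosys': '...'} for p in drs['predicates']]

    def to_sigml(signs):
        """Encode the sign sequence as toy SiGML-like XML for an avatar."""
        items = ''.join('<sign gloss="%s"/>' % s['gloss'] for s in signs)
        return '<sigml>%s</sigml>' % items

    print(to_sigml(generate(analyse('Anna takes the mug.'))))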
7. Social challenges of automated signing

The social challenges involve misunderstandings, in both the hearing and deaf communities, about research on automated signing. Hearing people often have misconceptions about and little social awareness of sign languages. The recognition and acceptance of a sign language as an official minority language is vital to deaf people, but recognition alone will not help users if there are insufficient interpreters and interpretation and communication services available. Clearly, there is a need to increase the number of qualified interpreters, but there is also a need to seek alternative opportunities to improve everyday communication between deaf and hearing people and to apply modern technology to serve the needs of deaf people. Computer-animated ‘virtual human’ technology, the graphic quality of which is improving while its costs are decreasing, has the potential to help. In the deaf community, however, there is often a fear that the hard-won recognition of their sign language will lead to machines taking over the role of human interpreters.

The image of translation systems has been that they offer a ‘solution’ to translation needs. In fact, however, they can only be regarded as useful aids: generated signing and MT cannot achieve the quality of human translation or natural human signing. Still, the quality of animated signing available at present is not the end point of the development of this approach. Hutchins (1999) stresses the importance of educating consumers about the achievable quality of MT: it is important that consumers have realistic expectations of automated systems. These limitations are broadly understood in the hearing community, where MT and text-to-speech systems have a place in certain applications, such as ad-hoc translation of web information and automated travel announcements, but do not approach the capabilities of human language users. Automated signing systems pose an even bigger challenge to natural language processing than systems for oral languages, because of the different nature of sign language and, in addition, the lack of an accepted written form. It is therefore important that deaf communities are informed about what can realistically be expected and that, in the medium term, these techniques do not challenge the provision of human interpreters.
8. Conclusion

Sign languages, as the main means of communication of the deaf, were not always accepted as true languages. Their recognition began only in the past 50 years, once it was realised that sign languages have complex and distinctive phonological and syntactic structures. Advances in sign language linguistics were slowly followed by research in computational sign language processing. In the last few years, computational research has become increasingly active. Numerous applications have been developed, most of which, unfortunately, are not fully mature systems for analysis, recognition, or synthesis. This is because signing includes a high level of simultaneous action, which increases the complexity of modelling grammatical processes. There are two barriers to overcome in order to address these difficulties. On the one hand, sign language linguists still need to find answers to questions concerning grammatical phenomena in order to build computational models, which require a high level of detail (to drive avatars, for example). On the other hand, additional problems result from the fact that computers have difficulties, for example, in extracting reliable information on the hands and the face from video images.

Today’s automatic sign language recognition has reached the stage where speech recognition was 20 years ago. Given the increased activity in recent years, the future looks bright for sign language processing. If anyone with a camera (or Kinect device) and an internet connection could use natural signing to interact with a computer application or with other (hearing or deaf) users, the possibilities would be endless. Until these goals are achieved, research would benefit from the automation of aspects of the transcription process, which would provide greater efficiency and accuracy. The use of machine vision algorithms could assist linguists in many aspects of transcription; in particular, such algorithms could increase the speed of fine-grained transcription of visual language data, thus further accelerating linguistic and computer science research on sign language and gesture. Although there is still some distance to go in this field, sign language users can already benefit from the intermediate results of the research, which have produced useful applications such as multilanguage multimedia dictionaries and teaching materials.
9. Literature and web resources

Atkins, Sue/Clear, Jeremy/Ostler, Nicholas 1991 Corpus Design Criteria. In: Literary and Linguistic Computing 7, 1–16.
Blackburn, Patrick/Bos, Johan 2005 Representation and Inference for Natural Language. A First Course in Computational Semantics. Stanford, CA: CSLI Publications.
Boguraev, Bran/Briscoe, Ted (eds.) 1989 Computational Lexicography for Natural Language Processing. London: Longman.
Bowden, Richard/Sarhadi, Mansoor 2002 A Non-linear Model of Shape and Motion for Tracking Fingerspelt American Sign Language. In: Image and Vision Computing 20(9–10), 597–607.
Bowden, Richard/Windridge, David/Kadir, Timor/Zisserman, Andrew/Brady, Michael 2004 A Linguistic Feature Vector for the Visual Interpretation of Sign Language. In: European Conference on Computer Vision 1, 390–401.
Braffort, A./Choisier, A./Collet, C./Dalle, P./Gianni, F./Lenseigne, B./Segouat, J. 2004 Toward an Annotation Software for Video of Sign Language, Including Image Processing Tools and Signing Space Modelling. In: Actes de LREC 2004, 201–203.
Buttussi, Fabio/Chittaro, Luca/Coppo, Marco 2007 Using Web3D Technologies for Visualization and Search of Signs in an International Sign Language Dictionary. In: Proceedings of Web3D 2007: 12th International Conference on 3D Web Technology. New York, NY: ACM Press, 61–70.
Carpenter, Bob/Penn, Gerald 1999 The Attribute Logic Engine. User’s Guide (Version 3.2 Beta). Bell Labs.
Chen, Yiqiang/Gao, Wen/Fang, Gaolin/Wang, Zhaoqi/Yang, Changshui/Jiang, Dalong 2002 Text to Avatar in Multi-modal Human Computer Interface. In: Proceedings of Asia-Pacific CHI (APCHI 2002), 636–643.
Chen, Yiqiang/Gao, Wen/Fang, Gaolin/Wang, Zhaoqi 2003 CSLDS: Chinese Sign Language Dialog System. In: Proceedings of IEEE International Workshop on Analysis and Modeling of Faces and Gestures (AMFG ’03). Nice, France, 236–238.
Cormier, Kearsy/Wechsler, Stephen/Meier, Richard P. 1999 Locus Agreement in American Sign Language. In: Webelhuth, Gert/Koenig, Jean-Pierre/Kathol, Andreas (eds.), Lexical and Constructional Aspects of Linguistic Explanation. Chicago, IL: University of Chicago Press, 215–229.
Cox, Stephen J./Lincoln, Michael/Tryggvason, Judy/Nakisa, Melanie/Wells, Mark/Tutt, Marcus/Abbott, Sanja 2002 TESSA, a System to Aid Communication with Deaf People. In: ASSETS 2002: Proceedings of the 5th International ACM SIGCAPH Conference on Assistive Technologies, Edinburgh, 205–212.
Crasborn, Onno/Hanke, Thomas 2003 Metadata for Sign Language Corpora. [Available on-line at: www.let.ru.nl/sign-lang/echo/docs/ECHO_Metadata_SL.pdf]
Cuxac, Christian 2003 Une Langue moins Marquée comme Analyseur Langagier: l’Exemple de la LSF. In: Nouvelle Revue de l’AIS (Adaptation et Intégration Scolaires) 23, 19–30.
Efthimiou, Eleni/Fotinea, Stavroula-Evita/Sapountzaki, Galini 2006 E-accessibility to Educational Content for the Deaf. In: EURODL, 2006/II. [Electronically available since 15.12.06 at: http://www.eurodl.org/materials/contrib/2006/Eleni_Efthimiou.htm]
Efthimiou, Eleni/Fotinea, Stavroula-Evita/Vogler, Christian/Hanke, Thomas/Glauert, John R. W./Bowden, Richard/Braffort, Annelies/Collet, Christophe/Maragos, Petros/Segouat, Jérémie 2009 Sign Language Recognition, Generation and Modelling: A Research Effort with Applications in Deaf Communication. In: Stephanidis, Constantine (ed.), Universal Access in HCI, Part I, HCII 2009 (LNCS 5614). Berlin: Springer, 21–30.
Ekman, Paul/Friesen, Wallace/Hager, Joseph 1978 Facial Action Coding System. Palo Alto, CA: Consulting Psychologist Press.
Elliott, Ralph/Glauert, John R. W./Kennaway, Richard/Marshall, Ian 2000 The Development of Language Processing Support for the ViSiCAST Project. In: ASSETS 2000: Proceedings of the 4th International ACM SIGCAPH Conference on Assistive Technologies, New York, 101–108.
Elliott, Ralph/Glauert, John R. W./Jennings, Vince/Kennaway, Richard 2004 An Overview of the SiGML Notation and SiGMLSigning Software System. In: Streiter, Oliver/Vettori, Chiara (eds.), Workshop on Representing and Processing of Sign Languages, LREC 2004, Lisbon, Portugal. Paris: ELRA, 98–104.
Elliott, Ralph/Glauert, John R. W./Kennaway, Richard 2005 Developing Techniques to Support Scripted Sign Language Performance by a Virtual Human. In: Proceedings of HCII 2005, 11th International Conference on Human-Computer Interaction (CD-ROM), Las Vegas.
Elliott, Ralph/Glauert, John R. W./Kennaway, Richard/Marshall, Ian/Sáfár, Eva 2008 Linguistic Modelling and Language-processing Technologies for Avatar-based Sign Language Presentation. In: Efthimiou, Eleni/Fotinea, Stavroula-Evita/Glauert, John (eds.), Emerging Technologies for Deaf Accessibility in the Information Society (Special Issue of Universal Access in the Information Society 6(4)), 375–391.
Filhol, Michael 2008 Modèle Descriptif des Signes pour un Traitement Automatique des Langues des Signes. PhD Dissertation, Université Paris-11 (Paris Sud), Orsay.
Fotinea, Stavroula-Evita/Efthimiou, Eleni/Karpouzis, Kostas/Caridakis, George 2008 A Knowledge-based Sign Synthesis Architecture. In: Efthimiou, Eleni/Fotinea, Stavroula-Evita/Glauert, John (eds.), Emerging Technologies for Deaf Accessibility in the Information Society (Special Issue of Universal Access in the Information Society 6(4)), 405–418.
Francik, Jarosław/Fabian, Piotr 2002 Animating Sign Language in the Real Time. In: Proceedings of the 20th IASTED International Multi-Conference Applied Informatics, Innsbruck, Austria, 276–281.
Gibet, Sylvie 1994 Synthesis of Sign Language Gestures. In: CHI ’94: Conference Companion on Human Factors in Computing Systems. New York, NY: ACM Press, 311–312.
Handke, Jürgen 1995 The Structure of the Lexicon. Human Versus Machine. Berlin: Mouton de Gruyter.
Hanke, Thomas 2001 Sign Language Transcription with syncWRITER. In: Sign Language & Linguistics 4(1/2), 275–283.
Hanke, Thomas 2002 iLex – A Tool for Sign Language Lexicography and Corpus Analysis. In: Proceedings of the 3rd International Conference on Language Resources and Evaluation, Las Palmas de Gran Canaria, Spain. Paris: ELRA, 923–926.
Hanke, Thomas 2004 HamNoSys – Representing Sign Language Data in Language Resources and Language Processing Contexts. In: Streiter, Oliver/Vettori, Chiara (eds.), Workshop on Representing and Processing of Sign Languages, LREC 2004, Lisbon, Portugal. Paris: ELRA, 1–6.
Huenerfauth, Matt/Lu, Pengfei 2012 Effect of Spatial Reference and Verb Inflection on the Usability of American Sign Language Animations. In: Universal Access in the Information Society. Berlin: Springer. [Online: http://www.springerlink.com/content/y4v31162t4341462/]
Hutchins, John 1999 Retrospect and Prospect in Computer-based Translation. In: Proceedings of Machine Translation Summit VII, September 1999, Kent Ridge Digital Labs, Singapore. Tokyo: Asia-Pacific Association for Machine Translation, 30–34.
Jennings, Vince/Kennaway, J. Richard/Glauert, John R. W./Elliott, Ralph 2010 Requirements for a Signing Avatar. In: Hanke, Thomas (ed.), 4th Workshop on the Representation and Processing of Sign Languages: Corpora and Sign Language Technologies. Valletta, Malta, 22–23 May 2010, 133–136.
Kamp, Hans/Reyle, Uwe 1993 From Discourse to Logic. Introduction to Model-theoretic Semantics of Natural Language, Formal Logic, and Discourse Representation Theory. Dordrecht: Kluwer.
Kaplan, Ronald M. 1989 The Formal Architecture of Lexical-Functional Grammar. In: Journal of Information Science and Engineering 5, 305–322.
Kaplan, Ronald M./Bresnan, Joan 1982 Lexical-Functional Grammar: A Formal System for Grammatical Representation. In: Bresnan, Joan (ed.), The Mental Representation of Grammatical Relations. Cambridge, MA: MIT Press, 173–281.
Karpouzis, Kostas/Caridakis, George/Fotinea, Stavroula-Evita/Efthimiou, Eleni 2007 Educational Resources and Implementation of a Greek Sign Language Synthesis Architecture. In: Computers & Education 49(1), 54–74.
Kennaway, J. Richard 2002 Synthetic Animation of Deaf Signing Gestures. In: Wachsmuth, Ipke/Sowa, Timo (eds.), Revised Papers from the International Gesture Workshop on Gesture and Sign Languages in Human-Computer Interaction. London: Springer, 146–157.
Kennedy, Christopher/Boguraev, Branimir 1996 Anaphora for Everyone: Pronominal Anaphora Resolution Without a Parser. In: Proceedings of the 16th International Conference on Computational Linguistics (COLING ’96), Copenhagen, 113–118.
Kipp, Michael 2001 Anvil – A Generic Annotation Tool for Multimodal Dialogue. In: Proceedings of the 7th European Conference on Speech Communication and Technology (Eurospeech), 1367–1370.
Kittler, Josef/Hatef, Mohamad/Duin, Robert P. W./Matas, Jiri 1998 On Combining Classifiers. In: IEEE Transactions on Pattern Analysis and Machine Intelligence 20(3), 226–239.
Kraiss, Karl-Friedrich (ed.) 2006 Advanced Man-Machine Interaction. Fundamentals and Implementation. Berlin: Springer.
Liddell, Scott K./Johnson, Robert E. 1989 American Sign Language: The Phonological Base. In: Sign Language Studies 64, 195–277.
Lockton, Raymond/Fitzgibbon, Andrew W. 2002 Real-time Gesture Recognition Using Deterministic Boosting. In: Proceedings of the British Machine Vision Conference.
Loomis, Jeffrey/Poizner, Howard/Bellugi, Ursula 1983 Computer Graphics Modeling of American Sign Language. In: Computer Graphics 17(3), 105–114.
Lu, Pengfei 2010 Modeling Animations of American Sign Language Verbs through Motion-Capture of Native ASL Signers. In: SIGACCESS Newsletter 96, 41–45.
Marshall, Ian/Sáfár, Eva 2004 Sign Language Generation in an ALE HPSG. In: Müller, Stefan (ed.), Proceedings of the 11th International Conference on Head-driven Phrase Structure Grammar (HPSG 2004), 189–201.
Marshall, Ian/Sáfár, Eva 2005 Grammar Development for Sign Language Avatar-based Synthesis. In: Stephanidis, Constantine (ed.), Universal Access in HCI: Exploring New Dimensions of Diversity (Vol. 8 of the Proceedings of the 11th International Conference on Human-Computer Interaction). CD-ROM. Mahwah, NJ: Lawrence Erlbaum.
Morimoto, Kazunari/Kurokawa, Takao/Isobe, Norifumi/Miyashita, Junichi 2004 Design of Computer Animation of Japanese Sign Language for Hearing-Impaired People in Stomach X-Ray Inspection. In: Miesenberger, Klaus/Klaus, Joachim/Zagler, Wolfgang/Burger, Dominique (eds.), Computers Helping People with Special Needs (Lecture Notes in Computer Science 3118). Berlin: Springer, 1114–1120.
Neidle, Carol 2001 SignStream™: A Database Tool for Research on Visual-gestural Language. In: Sign Language & Linguistics 4(1/2), 203–214.
Neidle, Carol 2002, 2007 SignStream Annotation: Conventions Used for the American Sign Language Linguistic Research Project and Addendum. Technical Reports 11 & 13. American Sign Language Linguistic Research Project, Boston University.
Ooi, Vincent B. Y. 1998 Computer Corpus Lexicography. Edinburgh: Edinburgh University Press.
Pollard, Carl/Sag, Ivan A. 1994 Head-driven Phrase Structure Grammar. Chicago, IL: The University of Chicago Press.
Potamianos, Gerasimos/Neti, Chalapathy/Gravier, Guillaume/Garg, Ashutosh/Senior, Andrew W. 2003 Recent Advances in the Automatic Recognition of Audio-visual Speech. In: Proceedings of the IEEE 91(9), 1306–1326.
Prillwitz, Siegmund/Leven, Regina/Zienert, Heiko/Hanke, Thomas/Henning, Jan 1989 HamNoSys Version 2.0: Hamburg Notation System for Sign Languages – an Introductory Guide. Hamburg: Signum.
Sáfár, Eva/Marshall, Ian 2002 Sign Language Translation Using DRT and HPSG. In: Gelbukh, Alexander (ed.), Proceedings of the 3rd International Conference on Intelligent Text Processing and Computational Linguistics (CICLing), Mexico, February 2002 (Lecture Notes in Computer Science 2276). Berlin: Springer, 58–68.
Sagawa, Hirohiko/Takeuchi, Masaru 2002 A Teaching System of Japanese Sign Language Using Sign Language Recognition and Generation. In: ACM Multimedia 2002, 137–145.
Sedgwick, Eric/Alkoby, Karen/Davidson, Mary Jo/Carter, Roymieco/Christopher, Juliet/Craft, Brock/Furst, Jacob/Hinkle, Damien/Konie, Brian/Lancaster, Glenn/Luecking, Steve/Morris, Ashley/McDonald, John/Tomuro, Noriko/Toro, Jorge/Wolfe, Rosalee 2001 Toward the Effective Animation of American Sign Language. In: Proceedings of the 9th International Conference in Central Europe on Computer Graphics, Visualization and Interactive Digital Media. Plzeň, Czech Republic, February 2001, 375–378.
Sims, Ed 2000 Virtual Communicator Characters. In: ACM SIGGRAPH Computer Graphics Newsletter 34(2), 44.
Sims, Ed/Silverglate, Dan 2002 Interactive 3D Characters for Web-based Learning and Accessibility. In: ACM SIGGRAPH, San Antonio.
Sinclair, John 1996 Preliminary Recommendations on Corpus Typology. EAGLES Document EAG-TCWG-CTYP/P. [Available at: http://www.ilc.cnr.it/EAGLES/corpustyp/corpustyp.html]
Sleator, Daniel/Temperley, Davy 1991 Parsing English with a Link Grammar. In: Carnegie Mellon University Computer Science Technical Report CMU-CS-91-196.
Speers, D’Armond L. 2001 Representation of American Sign Language for Machine Translation. PhD Dissertation, Georgetown University. [Available at: http://higbee.cots.net/holtej/dspeers-diss.pdf]
Starner, Thad/Pentland, Alex 1995 Visual Recognition of American Sign Language Using Hidden Markov Models. In: International Workshop on Automatic Face and Gesture Recognition, 189–194.
Stein, Daniel/Bungeroth, Jan/Ney, Hermann 2006 Morpho-syntax Based Statistical Methods for Sign Language Translation. In: Proceedings of the 11th Annual Conference of the European Association for Machine Translation, 169–177.
Stokoe, William/Casterline, Dorothy/Croneberg, Carl 1976 A Dictionary of American Sign Language on Linguistic Principles. Silver Spring, MD: Linstok Press.
Suszczanska, Nina/Szmal, Przemysław/Francik, Jarosław 2002 Translating Polish Texts into Sign Language in the TGT System. In: Proceedings of the 20th IASTED International Multi-Conference Applied Informatics (AI 2002), Innsbruck, Austria, 282–287.
Veale, Tony/Conway, Alan/Collins, Bróna 1998 The Challenges of Cross-Modal Translation: English-to-Sign-Language Translation in the Zardoz System. In: Machine Translation 13(1), 81–106.
Viola, Paul/Jones, Michael 2004 Robust Real-time Face Detection. In: International Journal of Computer Vision 57(2), 137–154.
Vogler, Christian/Metaxas, Dimitris 1998 ASL Recognition Based on a Coupling Between HMMs and 3D Motion. In: Proceedings of ICCV, 363–369.
Wang, Yubo/Ai, Haizhou/Wu, Bo/Huang, Chang 2004 Real Time Facial Expression Recognition with Adaboost. In: Proceedings of the 17th International Conference on Pattern Recognition (ICPR ’04), Vol. 3, 926–929.
Wilcox, Sherman/Scheibman, Joanne/Wood, Doug/Cokely, Dennis/Stokoe, William C. 1994 Multimedia Dictionary of American Sign Language. In: ASSETS ’94: Proceedings of the First Annual ACM Conference on Assistive Technologies. New York, NY: ACM Press, 9–16.
Windridge, David/Bowden, Richard 2004 Induced Decision Fusion in Automatic Sign Language Interpretation: Using ICA to Isolate the Underlying Components of Sign. In: Roli, Fabio/Kittler, Josef/Windeatt, Terry (eds.), 5th International Workshop on Multiple Classifier Systems (MCS04), Cagliari, Italy (Lecture Notes in Computer Science 3077). Berlin: Springer, 303–313.
Wittenburg, Peter/Brugman, Hennie/Russel, Albert/Klassmann, Alex/Sloetjes, Han 2006 ELAN: A Professional Framework for Multimodality Research. In: Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC 2006), 1556–1559.
Zhao, Liwei/Kipper, Karin/Schuler, William/Vogler, Christian/Badler, Norman/Palmer, Martha 2000 A Machine Translation System from English to American Sign Language. In: Proceedings of the 4th Conference of the Association for Machine Translation in the Americas on Envisioning Machine Translation in the Information Future (Lecture Notes in Computer Science 1934). Berlin: Springer, 54–67.
Zijl, Lynette van/Combrink, Andries 2006 The South African Sign Language Machine Translation Project: Issues on Nonmanual Sign Generation. In: Proceedings of SAICSIT06, October 2006, Somerset West, South Africa, 127–134.
Web resources

BritishSignLanguage.com: http://web.ukonline.co.uk/p.mortlock/
CyberGlove: http://www.cyberglovesystems.com
EAGLES project: http://www.ilc.cnr.it/EAGLES
eSIGN project: http://www.visicast.cmp.uea.ac.uk/eSIGN/Public.htm
Patterson glove: http://www.wired.com/gadgets/miscellaneous/news/2002/01/49716
Sign language corpora:
  American Sign Language (ASL): http://www.bu.edu/asllrp/cslgr/
  Australian Sign Language (Auslan): http://www.auslan.org.au/about/corpus
  British Sign Language (BSL): http://www.bslcorpusproject.org/
  German Sign Language (DGS): http://www.sign-lang.uni-hamburg.de/dgs-korpus/index.php/welcome.html
  Sign Language of the Netherlands (NGT): http://www.ru.nl/corpusngtuk/
  Swedish Sign Language (SSL): http://www.ling.su.se/pub/jsp/polopoly.jsp?d=14252
Sign Smith Studio: http://www.vcom3d.com/signsmith.php
SignSpeak project: http://signspeak.eu
TESSA project: http://www.visicast.cmp.uea.ac.uk/Tessa.htm
ViSiCAST project: http://www.visicast.cmp.uea.ac.uk/eSIGN/Public.htm
Eva Sáfár, Norwich (United Kingdom)
John Glauert, Norwich (United Kingdom)
Indexes

Index of subjects

Note: in order to avoid a proliferation of page numbers following an index entry, chapters that address a specific topic were not included in the search for the respective entry; e.g. the chapter on acquisition was not included in the search for the term “acquisition”.
A Aboriginal sign languages also see the Index of sign languages, 517, 535 – 539, 543, 930 acquisition 40, 515, 557, 561, 566 – 567, 576, 580, 588 – 590, 735, 770 – 775, 777, 779 – 780, 844, 848, 863, 874 – 879, 880, 949, 950 – 951, 959, 963 – 967, 969, 1025, 1037 – bilingual see bilingual, bilingualism – classifiers 172 – 174, 594 – handshape 13 – iconicity 38, 405, 408, 584, 592 – 594, 705 – non-manuals 63, 324 – planning 891, 903 – 904 – pronoun 593 – word order 257 adequacy 502 – 506 adverbial 105, 187, 188 – 189, 192, 228, 269 – 270, 273, 323, 357 – 359, 379 – 380, 719, 775 – non-manual 64, 69, 95 – 96, 201, 504, 526, 671, 991 affix, affixation 32, 44, 81, 82 – 85, 90, 91 – 96, 102 – 105, 107, 128, 130 – 131, 146, 165, 168 – 169, 172, 176 – 177, 287, 322, 326, 332 – 333, 335, 519 – 520, 521, 579, 586, 827 – 828 agreeing verbs see verb, agreeing agreement 44 – 46, 91, 96, 165 – 168, 177, 229, 237, 250, 256, 266 – 267, 273, 285, 328, 348, 354, 371 – 372, 379, 382 – 383, 447 – 448, 453 – 458, 471, 521 – 522, 543 – 544, 564 – 565, 567, 569, 586 – 588, 633, 638, 642, 661 – 662, 718 – 719, 744, 771, 807, 853, 868, 873 – 874, 929, 1058 – acquisition 593 – 594, 666 – 669, 674 – auxiliary 146, 150 – 151, 229, 522, 538, 588 – double 139, 141, 145, 147, 206, 213, 217, 223 – non-manual 70, 268, 300, 376, 707, 1061
– number 119 – 120, 124 – 127, 129, 131 – 132, 279 – 280, 283 – 284, 336 alphabet, manual 101 – 102, 501, 524, 532, 544, 659, 764, 800 – 801, 848 – 849, 913 – 914, 916, 951, 991, 1000, 1007 alternate sign language see secondary sign language alternating movement see movement, alternating analogy 395, 830, 1009, 1011 annotation also see notation, 937, 990 – 991, 1033 – 1034, 1041, 1068, 1070, 1077, 1079 – 1080 anthropomorphism 1007, 1012 aperture change 6, 12, 16, 24 – 26, 29, 35, 36, 826 aphasia 717, 740 – 741, 743, 745 – 746, 747, 763 – 765, 767 – 768, 776, 780 apraxia 764 arbitrariness, arbitrary – form-meaning relation 22, 39, 79, 83, 392, 438, 441, 447, 532, 584 – 585, 594, 628, 659, 671, 717, 825, 875, 920, 936 – location 229, 413 – 414, 416 – 417, 456, 527, 543, 768, 823 articulation also see coarticulation, 4 – 7, 9 – 14, 15 – 17, 46, 106, 118, 178, 189, 222, 253, 271 – 272, 503 – 504, 526, 576 – 579, 594, 637, 651 – 653, 661, 697 – 698, 730 – 732, 742, 769, 777 – 779, 822 – 823, 830, 835, 1047 – non-manual 56 – 57, 63 – 65, 69 – 70, 326, 330, 344, 750, 847 articulatory suppression 694 – 696 aspect, aspectual marker/inflection 96, 106, 132, 139, 170, 206, 213, 216 – 217, 256 – 257, 280, 285, 301, 318, 320, 403, 586, 663, 718 – 719, 827, 867, 869, 870 – 872, 937, 1060 – completive 91, 96, 186, 191 – 193, 195, 200, 542, 820, 828 – 829, 867, 871 – conative 193
– continuative, continuous 82, 91, 96, 193 – 194, 196, 719, 871 – 872 – durative, durational 82, 90 – 91, 96, 106, 194, 196 – habitual 91, 96, 105, 193, 195, 719, 871 – 872 – intensive 86 – iterative 82, 91, 96, 194 – 195, 451, 871 – lexical 191, 442 – 447, 448 – 451, 453, 458 – perfective 191 – 193, 196, 200, 322, 349, 542, 820, 828 – protractive 91, 96, 194 – situation 191, 193, 434, 442 – 448, 451, 453 – 454, 456 assessment 771, 773, 777, 961, 988 – 989 assimilation, phonological 14 – 15, 59, 128, 214, 231, 321, 503, 533, 537, 544, 578, 654, 789, 794, 809, 831, 1049 attention 164, 463 – 464, 473 – 474, 494 – 495, 576 – 577, 590 – 591, 612, 696, 747 – 748, 766, 789, 1062 attrition 842, 855 auditory perception see perception autism 592, 774 – 775 automated signing 1075 – 1076, 1083, 1094 – 1095 auxiliary also see agreement, auxiliary, 88, 187, 196 – 197, 229, 336, 818 avatar 1078, 1083 – 1085, 1088, 1093
B babbling 27 – 28, 589, 648 – 650, 653, 925 back-channeling 469, 505, 527, 804 backward reduplication see reduplication, backwards backwards verbs see verb, backwards beat see gesture, beat bilingual, bilingualism 495, 507, 560, 660, 676, 698, 747, 789, 841 – 845, 846 – 847, 849, 855, 982, 984 – 986 – education 897, 899, 903, 922, 957 – 963 bimodal bilingualism/bilinguals 635, 676, 789, 845 – 847, 950, 953 – 955, 963, 971, 986 birdsong 514 – 515 blend also see code, blending and error, blend – of (mental) space 142, 144 – 145, 147, 373 – 375, 390, 394, 405, 417, 425, 638, 1065 – of signs 99, 101, 171, 819, 825 – 826, 1000, 1013
body part 505, 562, 585 – 586, 934, 1012, 1049 body partitioning 375 – 376, 822, 1066 borrowing 97, 193, 221, 435 – 437, 537, 575, 612, 806, 825, 827, 966 – 967, 986, 1012 Broca’s area 516, 630, 740 – 743, 746, 748, 763, 765
C case marker/marking, case inflection/ assignment 84, 149 – 152, 234, 341, 350, 447, 457, 538, 817 – 818 categorization 160, 163, 176, 178, 434, 670, 1060 causative, causativity 105, 207, 211, 220 cerebellum 753 – 754, 769 change – aperture 25 – 26, 29, 35, 826 – demographic 560, 960, 972 – diachronic/historical 103, 277, 406 – 407, 792, 795, 803, 923, 1001 – 1002, 1036 – handshape 12, 37, 82, 234, 452, 525, 733, 769, 809 – language 39, 198, 215, 277, 388 – 389, 395, 406 – 407, 433, 505, 639, 789, 791, 795 – 796, 801 – 802, 816 – 819, 821, 826 – 827, 830, 834 – 836, 841 – 843, 855 – 866, 865 – 866, 879 – 881, 891, 924, 953, 1001, 1023, 1036 – phonological 99 – 100, 198, 791 – 792, 802, 821 – semantic 821 – sociolinguistic 953 – stem(-internal) 118, 128, 130 – 132, 873 channel see perception chereme, cheremic model also see phonology, 30, 38, 689, 1077 child-directed signing 590 – 591, 653, 655, 668, 672 classifier (construction) 32, 41 – 44, 95, 101, 119, 124 – 127, 131, 217, 234, 248 – 249, 256 – 257, 278, 285, 347 – 348, 360, 374, 392 – 393, 396 – 397, 401 – 403, 405 – 407, 415 – 418, 420 – 426, 448 – 449, 470, 499, 564 – 565, 567, 587, 594, 639, 659, 669 – 671, 674 – 675, 718, 745, 749, 765 – 767, 773, 776, 780, 821 – 823, 835, 929 – 931, 1003 – 1004, 1012 – 1013, 1026, 1030, 1060, 1084 – 1087 – body (part) 42 – 43, 161 – 162 – (whole) entity 42 – 43, 161 – 164, 166 – 169, 172 – 173, 177, 235 – 236, 418, 420 – 426, 449, 636, 639, 670, 675 – 676, 807, 1011
– handling/handle 41, 43, 161 – 164, 166 – 168, 172, 177 – 178, 257, 418, 420 – 424, 426, 594, 636, 639 – 640, 669 – 670, 727 – 728, 822 – instrument 160 – 161, 171, 399, 663 – numeral 125, 129, 131, 175, 178 – semantic also see classifier, entity, 160 – 161, 670 – size and shape specifier (SASS) 94, 96, 104, 160 – 162, 173, 398, 639, 669 – 670, 728 – verbal/predicate 160, 175, 176 – 180 classifier predicate/verb see verb, classifier clause 57, 246 – 247, 252, 255, 273, 294, 328, 330, 343, 454, 468, 471, 474, 480 – 482, 538, 615, 673, 808, 831 – 832, 872, 1063 – complement, complementation 188, 309, 340, 350 – 357, 376 – 377, 380, 534, 542 – conditional 61, 63, 65 – 66, 246, 295, 300, 671, 673 – embedded 278, 354 – 357, 366, 376 – 377, 381, 611 – interrogative see question – relative, relativization 56, 61, 63, 65, 238, 278 – 279, 295, 300, 308 – 309, 350, 357 – 361, 470, 476, 522, 542, 671, 1032 – subordinate 255, 340 – 341, 350 – 357, 575, 872 clause type/typing also see sentence, type, 56, 304 – 305 clitic, cliticization 59 – 60, 94, 96, 196, 219, 271, 274, 321, 333 – 334, 371, 480, 538, 580 coarticulation 9, 14 – 15, 302, 317, 325, 332, 847, 986, 1078, 1082 code – blending 842, 845 – 847, 986 – mixing 676, 842, 844 – 847, 848, 852, 965 – 966, 970 – switching 842, 844 – 847, 851 – 852, 856, 966, 969 – 970, 986, 1035 codification 896 – 898, 905, 1025 cognition, cognitive 83, 210, 220 – 221, 251, 259, 516, 590, 630 – 632, 638, 641, 712, 763, 770 – 775, 777, 779 – 781, 878, 922, 937, 989 – deficit/impairment 741, 745, 768, 770, 772, 777 – development 771, 952, 957, 967 – visual-spatial 83, 772 – 774, 873 cognitive linguistics 251, 255, 374 coherence 58, 422, 499 – 500 cohesion 417 – 418, 429, 499 – 500, 766 collective plural see plural, collective color term/sign 102, 433, 436 – 439, 441, 562, 591 – 592, 790, 797 – 798, 800
1105 communicative interaction see interaction, communicative community see deaf community complement also see clause, complement, 188, 252, 273, 309, 376, 533, 1087 – 1088, 1091 – 1092 complement clause see clause, complement complementizer 252, 297 – 299, 304, 341 – 342, 350 – 351, 360, 465 completive see aspect, completive completive focus see focus, completive complexity – grammatical/structural 146, 514, 517 – 519, 543, 552, 567, 740, 774, 826, 853, 1007, 1060 – morphological 7, 33, 81, 159, 161, 163, 165 – 166, 169 – 170, 415, 433, 467, 519, 533, 593, 670, 710, 817 – phonetic/articulatory 651, 778 – phonological 41 – 42, 81, 659 complex movement see movement, complex complex sentence see sentence, complex compound, compounding 29, 35, 59 – 61, 81 – 82, 96 – 104, 107, 171 – 172, 179, 277, 322, 407, 433, 437, 440, 443, 530, 532 – 533, 537, 542 – 544, 575, 793, 796, 818 – 819, 824 – 826, 848, 851, 869 comprehension 173 – 174, 406, 416, 469, 516, 667 – 668, 687 – 688, 699, 703, 705, 707, 730, 740 – 741, 743 – 745, 748 – 749, 752 – 753, 765 – 768, 773 – 779, 967 – 968, 991 computer corpus see corpus, computer conditional see clause, conditional conceptual blending 396, 417 conjunction 340 – 344, 349 – 350, 809, 828 – 829 constituent – order also see syntax, word order, 248 – 249, 251, 254 – 256, 286, 520, 533, 807, 1030 – 1031, 1038 – prosodic 56 – 61, 62 – 64, 67 – 70, 341 – syntactic 57 – 58, 62, 69, 246, 248, 252, 258, 294, 303 – 305, 325, 330 – 331, 340 – 342, 344, 353, 356, 358 – 359, 464, 466 – 468, 471 – 474, 478 – 479, 611 constructed action/dialogue also see point of view and role shift, 162, 230, 499, 637, 674, 991 contact see language contact and eye contact content question see question, content contrastive focus see focus, contrastive
conventional, conventionalization 78, 80 – 81, 170, 316, 390, 393 – 396, 398, 405 – 406, 433, 448, 455 – 456, 543, 584 – 585, 627 – 628, 634, 637, 651, 689, 705, 803, 819, 851, 869, 1026 – language model 602, 610, 863, 875 – 877, 879 – 880 – (sign) language 588, 602 – 604, 607 – 609, 614, 620, 651, 914 coordination 63, 341 – 350, 354, 359, 575 corpus – computer 1077 – 1081 – linguistics 937 – 938, 1033 – 1034 – planning 891, 896 – 898, 904 – sign language 259, 798, 800, 802, 848, 902, 937, 1024, 1034 – 1036, 1080 co-speech gesture see gesture, co-speech creole, creolization 40, 85, 219, 317, 561, 566 – 567, 577, 586, 842 – 844, 852, 935 – 936 cross-linguistic 84, 87 – 88, 145 – 146, 151, 180, 195, 209 – 216, 222 – 223, 250, 253 – 254, 256, 259, 305, 347, 357, 426, 433, 436, 613, 656, 796, 826, 871, 876, 929, 937 cross-modal/modality 47, 87, 128, 137, 153, 215, 218, 521, 746, 774, 842 – 843, 950, 962, 966 – 967, 970 – 971, 986 culture see deaf culture
D deaf activism/movement 950, 953, 954, 956 deafblind, deafblindness also see tactile sign language, 499, 523 – 525, 527, 576, 808 deaf community 40, 439, 494, 502, 504, 554 – 555, 559 – 560, 564 – 569, 604, 798 – 799, 803, 806 – 807, 810, 842 – 844, 852, 854 – 855, 866 – 868, 892 – 894, 897 – 899, 905, 910 – 911, 914 – 919, 920, 922, 926, 935, 937, 950, 952 – 955, 957, 971, 981 – 982, 984 – 987, 1000, 1002 – 1007, 1009, 1037, 1094 – 1095 deaf culture 439, 501, 505 – 506, 528, 892, 918, 953 – 954, 961, 1006 – 1007, 1038 deaf education see education, deaf deaf identity 565, 892, 954, 985, 1002, 1004, 1009 deaf school see education, school for the deaf definite, definiteness 80, 236, 252, 267, 269 – 274, 280, 283, 360, 463, 471 deictic, deixis also see gesture, deictic, 228, 274, 403, 527, 587, 593, 667, 1061
dementia 769 – 770 demonstrative 112, 175, 228, 238 – 239, 270 – 271, 273 – 274, 277, 284, 286, 309, 358, 360 – 361, 475, 533, 1062 derivation – morphological 81, 83, 89, 91 – 92, 103 – 107, 170, 322, 335, 407, 533, 538, 575, 718, 826, 864, 924 – syntactic 252, 305, 332, 342, 348, 381 determiner 96, 119, 129, 175, 178, 209, 228, 267, 269 – 275, 279 – 280, 283, 287, 301, 323, 358, 360 – 361 development see acquisition and cognition, development diglossia 843, 856 directional verb see verb, directional dislocation – left 295, 471 – 472, 481 – right 252, 275 – 276, 480 – 481 discourse 144 – 145, 166, 177, 229, 251, 271, 309, 366, 371 – 373, 375, 377 – 379, 381 – 382, 413 – 414, 417 – 418, 424, 426, 455, 463 – 464, 467 – 470, 493, 497 – 501, 527, 664, 674 – 675, 705, 745, 747, 766, 769, 773, 795, 808 – 809, 989, 1061 – 1063, 1066 – 1067 – marker 342, 500, 502, 641, 809, 820, 1061 – 1062 disjunction 62, 343 – 344, 349 distalization of movement see movement, distalization distributive plural see plural, distributive double, doubling 106, 118, 121, 257, 297 – 299, 307, 317, 329 – 330, 474, 478, 482 – 483, 526, 664 – 666, 790, 870 double agreement see agreement, double
E education – deaf 554 – 555, 560, 566, 568, 803, 805, 854, 868, 891 – 894, 903, 911 – 914, 916, 918 – 920, 934, 936, 981, 1002 – mainstreaming, mainstream school 876, 956, 958 – 960, 962 – 963 – oral also see oralism, 604, 619, 866 – 867, 892 – 893, 909 – 911, 913, 915 – 916, 918 – 920, 922, 925, 950, 952 – 953, 955 – 956, 962 – 963, 972, 985, 1005, 1032, 1037 – deaf school, school for the deaf 40, 506, 541, 566, 568, 799, 803, 853, 899, 901 – 902,
Index of subjects 910 – 911, 914, 919, 950 – 952, 956, 960, 983, 986, 1011, 1037 EEG 712, 734, 777 elicitation 172, 253 – 254, 256, 258, 670, 775, 792, 1024, 1026 – 1031, 1038 – 1039, 1041, 1079 ellipsis 213 – 214, 277, 336, 342, 346 – 347 embedding see clause, embedded emblems see gesture, emblems emergence 40, 150, 234, 513, 545, 594, 641, 743, 805, 817 – 818, 834, 910 – 911, 934 – 935, 950 emphatic, emphasis 55, 114, 207, 217, 234 – 235, 293, 297, 319, 321, 326 – 327, 329 – 330, 334, 403, 474, 482 – 483, 792, 847 entity classifier see classifier, (whole) entity ERP see EEG error 173, 583, 704, 712 – 713, 716, 719, 721 – 722, 724 – 733, 773, 775, 855 – in acquisition 589 – 590, 592 – 594, 651 – 659, 662, 667 – 670, 672, 675 – anticipation 722, 724, 726, 728, 733, 809 – aphasic 741 – 743, 747, 766, 768 – 770, 780 – blend 716, 719 – 720, 724 – 725, 728 – 729 – fusion 719, 724 – 725, 728 – 729 – morphological 727, 729 – perseveration 300, 326, 330, 724, 726 – 728, 809 – phonological 592, 651, 712, 721 – 722, 728 – 729, 770, 775 – phrasal 728 – 729 – substitution 590, 650, 652, 656 – 658, 716, 719 – 721, 724 – 725, 741 – 742, 768 – syntagmatic 726 event 84, 87, 96, 166, 188, 191, 343, 370, 375 – 376, 392, 395 – 396, 418 – 426, 442 – 447, 450 – 453, 456 – 457, 612, 635 – 636, 744, 822 – schema 222, 835 – structure 442 – 445, 450 – 452, 454 event visibility hypothesis 39, 444 evolution 38, 205, 207, 221, 514 – 517, 552, 565 – 567, 735, 817, 820, 823, 835, 847, 919, 980 exclusive see pronoun, exclusive exhaustive, exhaustivity 91, 125, 140, 143, 465 – 467, 474, 483 eye contact, visual contact 294, 361, 494, 505, 523, 674 eye gaze, gaze 6 – 7, 45, 70, 139, 216, 231, 268, 273, 275, 293, 341, 356, 368, 370, 373 – 374, 377, 397, 470, 495 – 496, 499,
1107 501 – 502, 527, 577, 666, 668, 674, 750, 1003, 1011, 1018, 1061 – 1063 eye tracking 7, 70, 139, 231, 268
F facial expression/articulation/signal also see gesture, facial, 5, 12, 56, 61 – 67, 70 – 71, 94 – 96, 106, 268, 272, 298, 310, 324, 327, 341, 368, 372 – 374, 381, 397, 425, 500, 503, 526, 534, 579, 583, 640, 651, 689, 707, 726, 728, 739, 748, 750, 765 – 766, 769, 775, 781, 827, 833, 851, 1003, 1008, 1011, 1061, 1066, 1078, 1081 – 1083 feature – agreement 206, 218, 266, 268, 273, 328, 587 – grammatical 119, 206, 356, 358, 713, 868 – inherent 24, 26 – 27 – linguistic 540, 543 – 544, 557, 561, 634, 641, 842, 986, 1035 – non-manual 94 – 95, 106, 190, 218 – 219, 239, 247, 259 – 260, 293 – 294, 330, 520, 666, 673, 707, 809, 843, 937, 1003, 1094 – number 119, 138, 140 – 141, 143, 146, 151 – 153, 279 – 280 – person see person – phi 44, 141, 266 – 268, 273 – 275, 278, 283, 440, 713 – phonological 23 – 24, 26 – 27, 29 – 31, 35, 37, 43, 45, 80, 82, 91, 97, 102, 104, 106, 114 – 116, 121, 128, 132, 138 – 139, 144 – 145, 151, 168, 171, 178, 237, 336, 438, 658, 717 – 718, 720, 728 – 729, 790, 799, 826 – plural see plural – prosodic 25 – 26, 28, 37, 83, 467 – referential 266 – 267, 270, 274 – 276, 280 – semantic 87, 214, 219, 360 – syntactic (e.g. wh, focus) 298, 300, 310, 329 – 330, 358, 716, 990 feature geometry 25, 30 – 31 feedback 498, 527 – auditory 583 – in language production 713, 715, 730 – 731 – proprioceptive 583 – visual 17, 583, 650, 659, 732, 755, 779 figurative, figurative language 105, 999, 1008 – 1001 fingerspelling also see alphabet, manual, 15, 28 – 29, 102, 453, 499, 501, 518, 527, 533 – 534, 717, 763 – 764, 776, 778, 800 – 801,
804, 826 – 827, 847 – 849, 959, 969 – 971, 986, 991, 1082, 1093 fMRI 712, 734, 750 – 752 focus 68, 114, 119, 163, 175, 246, 256, 268, 282, 295, 297 – 298, 300, 306, 310, 329, 416, 462 – 468, 471 – 473, 478 – 483, 663 – 666, 870 – completive 467, 474 – 476, 479, 484 – contrastive 68, 351, 418, 464 – 467, 470, 472 – 473, 475 – 479, 483 – 484, 665 – 666 – emphatic 330, 482, 665 – 666 – information 474, 476, 482, 665 – 666 – marker/particle 268, 369, 467, 473, 475 – narrow 466 folklore 501, 1000, 1004, 1007, 1014, 1017 function words, functional element 84, 88, 94 – 95, 166 – 167, 191 – 192, 210, 214, 217, 219, 223, 269, 327, 579, 844, 850, 957 fusion also see error, fusion, 103, 230, 306, 423, 519 – 520, 537, 729 future (tense) 188 – 191, 222, 270, 320, 611 – 612, 820, 829 – 831
G gapping also see ellipsis, 277, 341, 346 – 349, 361 gating 107, 700, 717 – 719 gaze see eye gaze generative 38, 252 – 253, 328, 350, 664, 732, 876 gestural source/basis/origin 123, 198, 200 – 201, 221, 224, 638, 820, 823 – 824, 827 – 829, 832 – 833, 836 gestural theory of language origin 514 – 516 gesture 5, 39 – 41, 70, 142 – 145, 198, 220 – 221, 251, 366, 368 – 369, 373 – 376, 381 – 382, 393 – 396, 398, 405 – 406, 419, 500, 505, 514, 516, 542, 556, 593 – 594, 602 – 607, 611 – 612, 614 – 619, 629, 631, 636, 639 – 642, 649 – 651, 661, 668, 752 – 753, 766 – 767, 823 – 824, 827 – 831, 833, 851, 853, 871, 875 – 876, 878 – 879, 912 – 914, 1001, 1048, 1058 – 1059 – beat 5, 629 – deictic also see gesture, pointing, 143, 231, 267, 851 – emblems 374, 393, 533, 628, 634 – 635, 637, 851 – facial 516, 827, 831 – 833, 1052
– non-manual 221, 268, 324 – 327, 639 – 640, 672, 831 – 833, 851, 1052 – pointing also see gesture, deictic, 70, 141 – 142, 198, 208 – 209, 217, 227 – 234,267, 269, 274, 277, 373, 414, 418, 424, 505, 530, 584, 588, 592 – 594, 604 – 605, 607, 611 – 614, 627, 629 – 629, 637, 658, 667, 771 – 773, 809, 832, 835, 851, 969, 1062 – 1063 – representational 628 – 630, 634 – 642 goal see thematic role, goal grammatical category also see part of speech and word class, 91, 112, 171 – 172, 186 – 187, 196, 200, 220, 231, 342, 434, 613, 699, 790, 818 – 819, 827, 834 – 836, 929 grammaticalization, grammaticization 103, 146, 170, 187, 192, 198, 200, 204 – 205, 207 – 211, 215 – 216, 219 – 224, 337, 360, 500, 634, 639 – 641, 671, 719, 789
H haiku 998, 1005, 1007 – 1008, 1010, 1012 – 1013, 1017 handling classifier see classifier, handling handshape 6 – 8, 12 – 16, 22 – 25, 27, 29 – 31, 33, 35 – 37, 39, 79 – 80, 82, 99, 102, 107, 122 – 124, 137, 146, 158, 160, 168, 170 – 171, 173, 178, 215 – 217, 221 – 222, 230 – 236, 239, 254, 318, 321 – 322, 335 – 336, 390 – 396, 401 – 402, 415, 420, 437 – 438, 444, 448 – 449, 451 – 453, 455 – 457, 492, 501, 503 – 505, 519, 525, 532, 561 – 562, 567, 575, 579 – 580, 586, 606 – 607, 615 – 616, 629, 635, 638, 640, 649, 654 – 656, 658, 688 – 690, 697, 700 – 704, 706 – 707, 712, 717 – 718, 720 – 722, 727 – 729, 733, 742 – 744, 768 – 769, 772, 776, 788, 790 – 791, 794 – 795, 799, 804, 807, 809, 821 – 824, 826, 831, 835, 849, 878, 913, 931, 1000 – 1001, 1007 – 1008, 1010 – 1011, 1013 – 1014, 1016 – 1018, 1026, 1047, 1049 – 1052, 1055, 1060, 1066, 1077, 1082 – 1087, 1091 handshape, classifier 41 – 43, 101, 119, 125 – 126, 174, 360, 397 – 398, 403, 406, 564 – 565, 569, 669, 671, 807, 821, 823, 835, 1026, 1084 – 1085 headshake also see negative head movement, 70, 316, 318, 325 – 327, 330 – 332, 342, 349, 355 – 357, 492, 521, 526, 611 – 612, 641, 671 – 672, 733, 831 – 832, 851
hearing signer 5, 439, 507, 517 – 518, 528, 536, 540 – 541, 552, 554, 556, 559 – 560, 565 – 569, 578, 676, 698, 746, 748, 741, 749 – 750, 752, 754, 774, 776 – 777, 899, 986, 1005 hemisphere 577, 739 – 742, 745 – 753, 755, 763 – 768, 780 – 781, 876 – left 739 – 742, 745 – 749, 755, 763 – 765, 767 – 768, 780 – 781 – right 577, 739 – 740, 745, 748 – 750, 752 – 753, 755, 765 – 767, 780 – 781 historical, historical relation 34, 40, 150, 198, 205, 209 – 210, 215 – 216, 220, 245, 251, 259, 350, 399, 405 – 406, 438 – 439, 456, 540, 566, 586, 743, 791 – 792, 800, 805, 827, 830, 854 – 855, 864 – 867, 891, 896, 1001, 1047, 1076
I icon, iconic, iconicity 21 – 22, 38 – 42, 44, 46, 78 – 79, 85, 88 – 90, 102, 105, 107, 150, 164, 170, 173 – 174, 194, 248, 250 – 251, 260, 269 – 270, 414, 417, 419, 421, 426, 433 – 435, 441, 444, 458 – 459, 503, 517, 530, 536, 542, 545, 562, 575 – 576, 584 – 588, 592 – 594, 604 – 605, 611, 614 – 615, 628, 632, 636 – 637, 639 – 641, 647 – 648, 650 – 651, 655, 659, 667 – 671, 673 – 674, 688 – 689, 705, 717 – 719, 743 – 744, 774, 822, 824, 828, 835 – 836, 851, 853, 873, 918, 920 – 921, 924, 934, 936 – 937, 990 – 992, 1051, 1053, 1059, 1085 imperative 292 – 293, 311, 324, 478, 561, 1061 inclusive see pronoun, inclusive incorporation also see numeral incorporation 101 – 102, 112 – 113, 121 – 123, 171, 232 – 235, 256, 271, 284 – 285, 320, 519, 847 indefinite, indefiniteness 227 – 228, 234 – 239, 269 – 274, 276, 287
1109 index, indexical 38, 60, 88, 94, 140 – 141, 144 – 145, 188, 192 – 193, 196 – 197, 205, 207 – 222, 229, 233, 236, 267, 269 – 270, 273, 275, 280, 294, 304 – 305, 352, 360 – 361, 365 – 366, 377 – 383, 435, 471 – 472, 482, 494, 503, 526, 534, 538, 584, 587, 593, 663, 766, 794, 809, 873, 1030, 1057, 1092 index finger 13, 121, 123 – 124, 137, 163, 197, 232, 269, 272, 318, 390 – 391, 395, 398 – 299, 401 – 402, 415, 420, 506, 533 – 534, 593, 612, 618, 628, 634, 658, 700, 742, 764, 773 indicating verb see verb, indicating indirect report 365 – 366, 371, 380, 493 inflection 13, 77, 80, 81, 83 – 86, 88, 90 – 91, 95 – 96, 104 – 107, 113, 119 – 120, 126, 128 – 132, 139, 145, 166, 172, 186 – 188, 190 – 191, 193, 200 – 201, 205 – 206, 210 – 213, 215 – 217, 219 – 220, 250, 256 – 257, 270 – 271, 274, 279, 283 – 285, 287, 320, 328, 336, 403 – 404, 406, 542, 564, 587, 609 – 610, 613, 662, 667 – 668, 670, 713, 716, 718 – 719, 771, 828, 847 – 848, 864, 871, 873 informant, informant selection 530, 990, 1023 – 1034, 1036 – 1042 information structure 56, 64, 246, 520, 664, 870 inherent feature see feature, inherent initialize, initialization 101 – 102, 438, 444, 449, 453, 586, 847, 849, 969 interaction, communicative 5, 40, 55, 68, 457, 468, 524 – 525, 528, 544, 565, 628, 790, 804, 810, 823, 832 – 833, 843, 845, 853 – 854, 868, 893, 910, 936, 961 – 962, 965, 970 – 971, 980 – 982, 984, 986, 989 – 990, 1027, 1030, 1034, 1054, 1059, 1076 interface 132, 143 – 145, 310, 341, 630, 688, 711, 713, 715, 732, 734, 780, 1077, 1080 internal feedback see feedback, internal International Sign see the Index of sign languages interpret, interpreter, interpreting 498 – 499, 525, 527, 589, 703, 847, 853 – 854, 895, 902 – 904, 953, 955, 958 – 959, 962 – 963, 1038, 1078, 1095 – language brokering 980 – 985 interrogative see question interrogative non-manual marking see question, non-manual marking intonation also see prosody, 55 – 71, 295 – 296, 310, 326, 341, 481, 502, 1048, 1061 – 1062, 1070
introspection 991, 1023 – 1024, 1026, 1033 – 1034 IS see the Index of sign languages iterative, iteration 59, 67, 82, 91, 96, 105 – 106, 193 – 195, 453, 871
J joint, articulatory 9 – 13, 15 – 16, 24, 26, 28, 45 – 46, 190, 578, 580 – 581, 590, 652 – 653, 656, 1081
K kinship, kinship terms 80, 102, 276, 432 – 433, 436, 438 – 441, 458, 667, 929, 1027
L language acquisition see acquisition language brokering see interpretation, language brokering language change see change, language language choice 506 – 507, 796, 807, 845, 894, 953, 958 – 959, 964, 968, 970 – 972, 989, 991, 1079 language contact 215, 518, 528, 540, 557 – 558, 560 – 561, 789, 801, 806, 863, 868 – 869, 911, 934, 936, 949, 953, 963, 965, 968 – 971, 980. 986, 990, 1035 language development see acquisition language evolution see evolution language family 80, 107, 148, 221, 233, 933 – 934, 936 language planning 713, 715, 951, 953, 955, 957, 961 – 962, 971 language policy 889 – 890, 894, 920, 949 – 950, 952, 954 – 955, 957, 960 – 961 language politics 889 – 890, 895 language processing see processing language production see production lateralization see hemisphere left hemisphere see hemisphere, left leftwards movement see movement, leftwards legal recognition 889 – 890, 891 – 896, 899, 903 – 904, 926, 950, 953 – 955, 1095 lexeme 22 – 24, 26, 29 – 31, 80 – 81, 85, 107, 433 – 434, 459, 638, 640, 716 – 717, 719,
818 – 819, 821, 823, 825 – 826, 835 – 836, 1023, 1026 lexical access 687 – 688, 690, 699, 701 – 704, 706, 713 – 714, 716, 721, 724, 747 lexical development see acquisition, lexical lexicalization, lexicalized 29, 59, 81, 98 – 99, 107, 122, 143, 146, 151, 170, 172, 190, 198, 209, 221, 327, 336, 397 – 398, 402, 505, 610, 640 – 641, 671, 706, 719, 789, 851 lexical modernization 889 – 891, 896, 903, 905 lexical negation see negation, lexical lexical variation see variation, lexical lexicography 798, 895, 898, 1023, 1030, 1075 – 1076 lexicon 7, 9, 11 – 14, 16, 38 – 39, 69, 78 – 81, 84 – 86, 88, 97, 140, 142 – 145, 147, 152, 170, 172, 198, 326, 401, 406, 426, 432, 434 – 435, 442, 515, 518, 530, 532 – 533, 536, 541, 543, 545, 556, 575, 585, 602 – 605, 632, 648, 655, 659, 688, 696, 703, 705, 711, 713, 716 – 719, 721, 724, 735, 774, 777, 789, 797, 800, 803, 806, 817 – 821, 823 – 826, 836, 847 – 849, 853 – 854, 864, 875, 889, 896 – 903, 905, 927, 930, 957, 991, 1010, 1012, 1017, 1038, 1049, 1055, 1076 – 1077, 1082, 1085 – 1086, 1088 – 1089, 1093 – frozen 101, 169 – 172, 179, 216, 269 – 270, 398, 587, 718 – 719, 836 – productive 38, 40, 81, 100, 164, 170 – 172, 180, 403, 459, 688 – 689, 705, 718, 819, 822 – 823, 825, 835, 1011 – 1013, 1015, 1059 linguistic minority 789, 806, 841 – 842, 892 – 895, 911, 938, 949 – 950, 953 – 954, 956 – 957, 960 – 961, 967, 971, 980, 984, 986, 1034, 1039, 1095 little finger 13, 15, 123 – 124, 322, 391, 440, 792 location also see place of articulation, 4, 6 – 16, 24, 42, 44, 60, 78 – 80, 82, 86, 91, 95, 99 – 102, 105, 107, 117 – 119, 121 – 122, 124 – 125, 130, 141, 143, 148, 151, 160 – 161, 164 – 166, 168 – 171, 173 – 174, 177 – 180, 194, 213, 219, 228 – 234, 238, 255, 266 – 267, 280, 320, 358, 396, 401 – 403, 406 – 407, 412 – 424, 426 – 427, 435, 438, 448, 454 – 456, 459, 465, 470, 495, 499, 503, 519, 525, 527, 537, 543, 563 – 565, 569, 578, 584, 586 – 588, 593 – 594, 637 – 639, 649 – 650, 652, 654 – 656, 659, 661 – 662, 666 – 668, 670, 687 – 690, 692, 697, 700 – 704, 706, 728, 739, 742, 768 – 769, 773, 775 – 776, 781, 788, 790 – 791, 794 – 796, 799,
804, 821 – 822, 831, 874, 924, 991, 1001, 1011 – 1014, 1016, 1018, 1026, 1029 – 1030, 1049, 1051 – 1053, 1059 – 1060, 1062, 1077, 1082 – 1085, 1087, 1091 – 1092
M machine-readable 1033 – 1034, 1050, 1067, 1076, 1079, 1085, 1086 machine translation 751, 1075 – 1076, 1078, 1085, 1093 – 1095 mainstreaming see education, mainstreaming manual alphabet see alphabet, manual manual code 517, 545, 911 manual communication system 499, 915, 956 manual dominant see negation, manual dominant manual negation see negation, manual memory 405, 415, 463, 469, 668, 698, 705, 739, 753, 781, 878, 879, 1078 – short-term 690, 693 – 694, 698 – 699, 753 – span 694, 698 – 699 – working 687 – 688, 693 – 694, 696, 699, 704, 1031 mental lexicon 432, 434, 436, 703, 711, 713, 716 – 719, 721, 724, 735, 821 mental image, mental representation 142, 147, 373, 390, 394, 396, 405, 406, 638, 687 – 688, 690, 693, 699, 779, 835 – 836 mental space 142, 144 – 145, 373, 395, 412, 416 – 417, 835, 1065 metadata 1035 – 1036, 1042, 1070, 1080 metalinguistic 958, 968, 970 metaphor 38, 105, 179, 189 – 190, 217, 221, 433 – 435, 437 – 438, 441 – 442, 454, 458, 532, 648, 717, 800, 820, 825, 854, 991, 998, 1000, 1003 – 1004, 1007 – 1010, 1014, 1018 middle finger 13, 121, 123 – 124, 390, 420, 440, 524, 537, 576, 634, 700, 764, 1049 Milan Congress 866, 920, 952 – 953 minority see linguistic minority mirror neurons 516, 735 modality – communication channel 4 – 7, 17, 21 – 22, 31 – 34, 36 – 39, 46, 68 – 70, 77 – 78, 80, 82 – 83, 85 – 88, 90, 95 – 97, 101, 105, 112 – 113, 118, 122, 127 – 128, 131 – 132, 137 – 138, 150, 153, 177, 188, 205, 210, 216, 219, 221 – 222, 238, 240, 245 – 246, 248, 250, 252 – 254, 257, 259, 265 – 267, 293, 302, 316, 337, 340, 348, 352, 354, 361, 368, 395, 398,
1111 400, 404 – 405, 412 – 414, 418, 426 – 427, 442, 490, 494, 499, 502, 513, 520, 522, 527, 564, 569, 604, 607, 616, 618, 620, 626 – 627, 632 – 633, 636 – 642, 647 – 650, 676 – 677, 687, 705, 707, 711 – 713, 715 – 716, 719, 730, 732 – 734, 744, 746 – 747, 754 – 755, 762 – 764, 767 – 770, 772, 774, 776 – 780, 789, 806, 817, 836, 841, 843, 847, 851, 854, 856, 863 – 864, 869, 880 – 881, 910, 924, 936 – 937, 950, 961, 967 – 968, 986, 1023, 1059 – 1060, 1069 – 1070, 1083, 1088 – grammatical category 94, 187 – 188, 196 – 200, 269 – 297, 301, 306, 320, 323, 329, 332, 336, 478 – 479, 482, 483, 494, 502, 513, 820, 833, 929, 1060 – speaker attitude 11, 369, 371 – 372, 417 modal verb see verb, modal modulation 55, 71, 86 – 87, 90 – 91, 95 – 96, 106 – 107, 186 – 187, 189, 191, 193 – 195, 413, 522, 587, 662, 1061 monitoring 583, 711, 713, 715, 730 – 732, 735, 747, 779, 968 morpheme 7, 13, 32 – 33, 45, 78 – 79, 91 – 92, 101, 103, 105, 117 – 120, 128, 132, 142 – 146, 149 – 152, 158, 163, 165 – 166, 168, 171, 175 – 176, 178, 186 – 187, 193 – 195, 200, 223, 230, 249, 306, 321 – 322, 340, 348 – 349, 354, 358, 361, 392, 405, 424, 433, 442 – 443, 452 – 453, 491, 518 – 520, 526, 575, 594, 615, 670, 706, 713, 718, 727 – 730, 774, 816, 819 – 821, 827, 831, 848, 867, 877, 986, 1046, 1056 – 1060, 1084, 1088 morphological operation 77, 81 – 82, 87, 91, 112, 115 – 116, 128, 131 – 132, 143, 170 – 171, 234, 520 morphological realization 32, 84, 113, 115, 138, 144, 146, 175, 872 morphology 13, 30 – 33, 38 – 40, 42 – 43, 45, 247, 256 – 257, 266 – 267, 278 – 279, 281 – 284, 287, 296, 306, 309, 316 – 317, 321 – 322, 335 – 337, 341, 360, 380, 389, 392, 403, 405, 407, 415, 447 – 448, 453, 455, 457, 517, 519 – 521, 526, 533, 537, 539, 564, 574 – 576, 579, 586, 593, 595, 602 – 603, 606 – 607, 613, 616, 633, 635, 647 – 648, 667, 669 – 670, 711, 715 – 716, 718 – 719, 721, 727 – 732, 734 – 735, 754, 770 – 772, 774, 777, 807, 809, 817, 819, 824, 835, 845, 849, 852 – 853, 864, 868 – 869, 873 – 874, 877 – 878, 924, 928 – 930, 937 – 938, 986, 1014, 1023, 1026, 1035, 1045 – 1047, 1049, 1052 – 1055, 1058, 1060, 1062, 1067, 1069, 1077 – 1078, 1082
– sequential 81 – 83, 85, 89, 91 – 92, 95 – 97, 102 – 103, 107, 128, 131, 321, 322, 335 – 336, 873 – simultaneous 23, 30 – 33, 59, 77, 81 – 83, 86, 91, 96 – 97, 101 – 107, 168, 171, 195, 249, 254, 257, 321 – 322, 335, 873 morphophonology 33, 38, 744 morphosyntax 58 – 59, 65, 84, 112, 114, 130 – 131, 143, 205 – 206, 211, 256 – 257, 340 – 341, 349 – 350, 361, 413, 418, 443, 445, 565, 569, 663, 718, 770, 774, 780, 807, 874, 923, 968, 1023, 1031, 1047 motion capture 1081 – 1084, 1093 mouth 7, 69, 361, 451, 562, 639, 641, 651, 656, 748, 751, 846, 849, 1012, 1052, 1055, 1067, 1069, 1081, 1087 mouth gesture 327, 525, 728, 751, 849 – 850, 991 mouthing 69, 94 – 96, 114, 211, 214 – 215, 218 – 219, 319, 327, 358, 437, 440, 525, 530 – 531, 539, 544, 562, 751, 789, 800 – 801, 806, 841, 847, 849 – 850, 873, 898, 969 – 971, 986, 991, 1086 movement – alternating 105 – 106, 118 – 119, 121, 437, 722 – complex 114 – 119, 516, 689, 753, 764, 769 – formal operations 252, 257, 296 – 309, 328 – 329, 333, 344 – 346, 349, 353, 358, 459, 466, 471, 478 – 479, 499, 664, 665, 677 – iconic 79 – 80, 90, 105, 171, 186, 189, 198, 270, 336, 390, 394, 396 – 399, 401 – 402, 437 – 438, 449, 500, 530, 532, 537, 628, 648, 668, 822, 824, 1000, 1010 – 1011 – in child language acquisition 576, 578, 580, 589 – 591, 649 – 659, 668 – 670 – in classifier constructions 42 – 43, 82, 125, 160, 162, 165, 166, 168, 172 – 173, 398, 406 – 407, 415, 420, 422, 424, 448, 455 – 456, 458 – 459, 584, 638 – 639, 706, 718, 765, 823, 1088, 1092 – in compounds and word formation 98 – 103, 105, 826, 849 – in discourse and poetry 500, 503, 808 – 809, 1014 – 1016, 1018, 1063 – in verb agreement 13, 44 – 45, 82, 137 – 139, 145, 149, 205 – 206, 208, 210 – 213, 215, 217 – 218, 221 – 222, 280, 453, 456, 499, 521, 537, 543, 593, 749, 772 – local see aperture change – morphological 82 – 83, 89, 91, 96, 105, 107, 114 – 119, 121 – 122, 128, 130, 140, 186,
189 – 191, 193 – 196, 199 – 200, 232, 235, 271, 276, 281, 285, 322, 404, 406, 444, 453, 455 – 456, 519 – 520, 586, 717 – 719, 831, 1059, 1089, 1092 – non-manual 69, 121, 195 – 196, 317, 325, 327, 396, 452, 472, 500, 520, 582, 583, 640, 666, 750, 752 – path 12, 16, 26 – 28, 37, 44 – 45, 106, 118 – 119, 121, 128, 137, 139, 149, 151, 173 – 174, 189 – 190, 194 – 195, 205 – 207, 211, 217, 222, 269 – 270, 322, 348, 396, 398, 420 – 421, 438, 445, 447, 452, 454 – 456, 458, 525, 589, 591, 617 – 619, 631, 641, 655, 657, 670, 692, 696, 700, 722 – 723, 739, 744, 781 – phonological 4, 8, 10, 11, 16, 22 – 31, 33, 35 – 39, 59, 78, 104, 107, 114, 122 – 125, 131, 132, 168, 172, 230, 232, 448, 575, 579 – 580, 688 – 690, 692 – 693, 697, 700 – 704, 706, 717 – 719, 721 – 723, 728 – 729, 733 – 734, 742, 768 – 769, 775 – 776, 799, 804, 821, 915, 921, 1001, 1016, 1049 – 1053, 1055, 1077, 1081 – 1085, 1087 MT see machine translation
N narrative 166, 179 – 180, 228, 368, 373, 375, 418, 421, 425, 443, 448, 456, 483, 489, 501 – 502, 527, 626, 630, 635, 667, 674 – 675, 705, 747, 754, 790, 793, 806 – 807, 871, 966 – 968, 998, 1000, 1005, 1007, 1010, 1012, 1015 – 1016, 1027, 1029, 1032, 1048 – 1049, 1062 – 1063, 1065 – 1067 negation 64, 68, 70, 94, 130, 188, 192, 196, 223, 246, 268, 294, 300 – 301, 344, 348 – 350, 354 – 355, 357, 361, 478 – 479, 482, 519 – 521, 526 – 527, 534, 538, 541, 603, 611 – 612, 641, 671 – 673, 720, 766, 771, 773, 790, 820, 828, 832, 853, 872, 929, 937, 1031 – 1033, 1047, 1057, 1061 – adverbial 323 – concord 316 – 317, 319, 332 – 335 – head movement also see headshake, 70, 349, 357, 521, 526 – manual 223, 316 – 319, 324, 330, 766 – manual dominant 318, 521 – non-manual also see headshake, 330, 333, 349 – 350, 354 – 355, 357, 766 – non-manual dominant 318, 333, 521 negative particle 92, 94, 96, 103 – 104, 192, 293, 318 – 319, 324, 521, 832
negator 94, 96, 317 – 319, 323 – 324, 326, 328, 330 – 331, 333 – 335, 340, 348 – 349, 355, 521, 527, 851 neologism 219, 705, 806, 1011 – 1012, 1014 – 1016, 1019 nominal 29, 86, 88, 90, 95, 113 – 114, 119 – 120, 126, 128, 132, 148, 151, 205, 233, 271, 278, 323, 354, 360 – 361, 380, 469, 471, 476, 527, 537 – 538, 611, 727, 793, 825, 872, 1088 non-manual also see feature, non-manual, 7, 12, 22, 24, 55 – 57, 62 – 64, 68 – 70, 94, 106, 114, 117 – 120, 132, 139, 171, 187, 190 – 191, 194, 196 – 197, 199 – 201, 209, 216, 218, 221, 239, 245 – 247, 252, 259 – 260, 266, 268, 273, 275, 278 – 279, 292 – 295, 297, 302, 309, 316 – 319, 322 – 327, 330 – 331, 333 – 335, 340 – 341, 344, 349 – 350, 354 – 361, 376 – 377, 379 – 380, 424, 440, 450 – 452, 462, 472, 477 – 478, 483 – 484, 492 – 493, 501, 504 – 505, 518, 520 – 521, 525 – 527, 530, 539, 544, 562, 579, 626, 633 – 634, 637, 639, 641, 648, 661, 664, 666, 670 – 675, 707, 726, 729, 733 – 734, 765 – 766, 808 – 809, 829, 833, 843, 851, 924, 931, 937, 991, 1003, 1011 – 1012, 1015, 1031, 1040 – 1041, 1045, 1053, 1055, 1057, 1059, 1061 – 1062, 1081 – 1084, 1086 – 1089, 1094 – adverbial see adverbial, non-manual – agreement see agreement, non-manual – dominant see negation, non-manual dominant – negation see negation, non-manual – simultaneity 245 – 247, 260, 501, 520 notation also see annotation, 8, 12, 62, 143, 370 – 371, 381 – 382, 586, 895 – 896, 915 – 916, 921, 926 – 927, 1079, 1083, 1085, 1088, 1094 noun phrase 44, 119 – 120, 129 – 132, 140 – 141, 144, 171, 175, 227, 239, 265 – 269, 271, 273 – 279, 283 – 287, 293, 331, 342, 345, 347, 358, 360, 371, 382, 466 – 467, 471, 476, 480, 613, 675, 766, 807 – 808, 832, 835 noun-verb pair 83, 88 – 90, 95, 106, 807, 826 number, grammatical 84, 95, 101 – 102, 112 – 113, 119 – 125, 129 – 132, 136, 138, 140 – 141, 143 – 146, 151 – 153, 212, 216, 231 – 234, 265 – 268, 279 – 285, 287, 398, 413, 440, 501, 518 – 523, 525, 530 – 532, 538, 540 – 541, 544, 552 – 553, 555, 557, 559 – 562, 566 – 568, 577 – 579, 585, 590, 604, 608, 610, 615, 634, 638, 647 – 648, 653, 656, 658, 661 – 662, 668, 694 – 696, 698, 703 – 704, 713, 718, 728 – 730, 746, 754, 770, 775, 790, 792 – 793, 795, 797 – 805, 808 – 810, 817,
820 – 823, 826, 828, 830 – 831, 836, 844, 854, 863, 866 – 868, 874, 895 – 896, 901, 903 – 904, 910, 912, 914, 918, 923 – 924, 927, 929, 933 – 935, 937 – 938, 960, 962 – 964, 967, 970, 972, 981, 987 – 992, 1000, 1006 – 1007, 1014, 1016 – 1017, 1025 – 1026, 1028, 1030, 1037, 1045, 1047 – 1049, 1051, 1067 – 1068, 1071, 1078, 1084, 1094 – 1095 number agreement see agreement, number number feature see feature, number number sign 14, 102, 112 – 113, 121, 123 – 124, 530, 585, 866 numeral 28, 84, 113, 119 – 122, 125 – 126, 129 – 132, 160, 175 – 176, 178, 232, 235 – 236, 272, 283 – 286, 482, 541, 795, 802, 803 numeral incorporation also see incorporation, 101 – 102, 112 – 113, 121 – 123, 284 – 285
O object, grammatical 13, 44 – 45, 94, 96, 125, 138 – 139, 142 – 143, 148 – 151, 176 – 177, 205 – 206, 208, 211 – 212, 215 – 219, 221, 234, 246, 248, 251 – 252, 254, 267 – 268, 273, 280, 297 – 298, 301 – 302, 304, 308, 331, 345 – 348, 350, 354, 356, 359, 372, 376, 401 – 402, 416, 443, 448, 454, 467, 469, 472, 480, 520 – 522, 526, 542, 587 – 588, 603, 610, 662 – 666, 673, 744, 832, 877, 1058, 1089, 1091 – 1092 – direct 139, 148, 212, 248, 297, 301, 467, 469, 832, 1047, 1056 – indirect 45, 139, 148 – 149, 1047 onomatopoeia 395, 400, 441 – 442, 586 operator 295, 317, 346, 348 – 349, 358 – 359, 376 – 380, 383, 465 – 466, 478 – 479, 1061 oral education see education, oral and oralism oralism also see education, oral, 911, 913, 916, 919 – 920, 922, 952 – 953, 955 – 956 orientation – morphological process 42 – 43, 45 – 46, 80, 137 – 139, 145, 150, 165, 168, 171, 176, 205 – 206, 215, 322, 336, 416, 420, 444, 451, 453, 456, 457, 593, 720, 744, 765, 767, 780, 781, 821 – phonological 6 – 8, 10, 13, 17, 22, 24, 26 – 27, 39, 42 – 43, 80, 99, 118, 171, 176, 180, 197 – 198, 231, 235 – 236, 277, 394, 505, 521, 525, 537, 575, 592, 650, 688 – 690, 728, 739, 767, 769, 775, 780 – 781, 788, 821, 949,
952, 957, 960, 971, 1013, 1049, 1051 – 1052, 1055, 1077, 1082, 1087, 1091
P pantomime 392, 627 – 630, 634 – 635, 637 parameter/parameterize 8, 22, 27, 30 – 31, 36, 45, 101 – 102, 104, 107, 165, 169, 171, 172, 230, 413, 648, 650, 652, 655, 658, 661, 688 – 692, 694, 696, 699 – 707, 768, 788, 795, 849, 855, 1001, 1003, 1010, 1013, 1016 – 1017, 1077, 1082 – 1085, 1088, 1091 paraphasia 741, 765, 768, 780 part of speech also see word class and grammatical category, 91 – 92, 95, 741, 750, 1054, 1067 passive (voice) 251, 259, 542, 867, 874, 877 past tense 33, 92, 188 – 192, 196 – 197, 495, 536, 557, 611 – 613, 669, 677, 705, 828, 877, 1027, 1046 perception 4 – 7, 17, 21 – 22, 39, 46, 61, 69, 266, 452, 457, 507, 523, 574, 576, 582, 715, 728, 732, 734 – 735, 746, 749 – 750, 752 – 753, 755, 780, 1014 perseveration see error, perseveration person 13, 33, 43, 121 – 122, 125, 136, 138, 140 – 141, 143 – 146, 150 – 153, 207, 211 – 214, 216 – 219, 234, 237, 240, 266 – 267, 269, 279 – 280, 287, 320, 336, 348, 354, 370, 378, 413, 440, 456, 501, 518, 521 – 522, 534, 565, 662, 713, 808, 874, 1055, 1058, 1089 – first 13, 122, 125, 143 – 144, 150, 213, 218, 229 – 233, 275, 277, 365, 370 – 372, 376, 379, 382 – 383, 518, 588, 608, 808, 1055 – 1056 – non-first 122, 143 – 145, 153, 218, 228, 230 – 233, 275, 808 – second 121, 214, 230 – 231, 266, 269, 275, 336, 527, 608, 1057 – third 121, 125, 230 – 231, 266, 269, 275, 280, 456, 527, 608 – 609, 1065 perspective 167, 368, 371, 374 – 376, 397, 412 – 413, 415, 418 – 427, 499 – 501, 587, 590 – 592, 635, 640, 671, 674, 707, 774 – 775, 854, 1062 PET 712, 734, 749, 752 phi-feature see feature, phi phonetic(s) 21 – 22, 31, 37 – 38, 44 – 46, 57 – 59, 61, 68, 71, 107, 123, 125, 143, 145 – 146, 150, 178, 348, 351, 361, 390 – 391, 395 – 396, 401, 403 – 404, 406, 561, 578, 586, 649 – 650, 656, 668, 700, 714 – 715, 728, 732, 742, 776,
820, 824, 852, 926, 929, 932, 1023, 1040, 1046, 1049, 1054, 1056, 1084 – 1086, 1088, 1094 – notation 8, 12, 929, 1094 – transcription 5, 926, 1046, 1086 – variation 4 – 5, 9, 14, 17 phonology 4 – 5, 7 – 9, 11 – 17, 57, 59 – 60, 62, 69, 71, 78, 80 – 83, 91, 97 – 98, 100 – 102, 105 – 107, 112, 114 – 117, 119 – 121, 127 – 128, 131 – 132, 138 – 141, 144, 146, 150, 168 – 171, 177 – 178, 193, 195, 198, 212 – 215, 219, 222 – 223, 230 – 231, 257, 310, 341, 392, 395, 413, 438, 444 – 445, 452 – 453, 456 – 457, 459, 515, 521, 525, 530, 533, 537, 544, 561, 575 – 576, 579, 580, 585 – 587, 592, 606, 633 – 635, 647 – 651, 655, 659, 676 – 677, 711 – 722, 724, 726 – 735, 743, 747, 765, 768, 770 – 771, 774 – 776, 794, 848 – 849, 852, 915 – 916, 921, 923 – 925, 928 – 930, 932, 935, 938, 986, 1016, 1023, 1034, 1040, 1045 – 1047, 1054, 1057, 1059 – 1061, 1067, 1084, 1095 – assimilation see assimilation, phonological – development 647, 650 – 651, 925 – change see change, phonological – oral component see mouth gesture – similarity 454, 459, 530, 690, 694 – 695, 698 – slip see error, phonological – spoken component see mouthing – variation 788 – 793, 795 – 796, 798 – 799, 809 – 810, 831, 1035 phonotactic 22, 28 – 29, 35, 37, 52, 396, 650, 704, 848 – 849 pidgin 40, 85, 561, 567, 842 – 844, 852 – 854, 862 – 865, 874 – 876, 878, 936, 970, 991 pinky see little finger place of articulation also see location, 7, 22 – 25, 27, 30 – 31, 33, 35, 43, 45, 114 – 115, 168, 413, 437, 442, 448, 575, 578 – 579, 583, 586, 591, 649, 720 – 721, 728 – 729, 733, 747, 791, 795, 830 plain verb see verb, plain planning see language planning plural, plurality 13, 81 – 82, 91, 96, 105 – 106, 140, 143 – 144, 200, 211, 230 – 234, 268, 270, 279 – 284, 287, 336, 534, 537, 544, 773 – 774, 872, 937, 1030 – 1031, 1088 – 1089, 1092 – collective 121, 124 – 125, 140, 143, 279, 872, 1089, 1092 – distributive 117, 121 – 122, 124 – 125, 140, 143, 1089, 1092
poetry 406 point, pointing, pointing sign also see deictic and pronoun – gesture see gesture, pointing – linguistic 45, 88, 92 – 94, 104, 121 – 122, 124, 139, 140 – 142, 190, 208 – 215, 217 – 218, 221 – 222, 238, 267 – 269, 271 – 274, 276 – 277, 279 – 280, 304, 351 – 353, 355, 414, 418, 424, 426 – 427, 471, 503, 505, 522, 526 – 527, 530, 533, 537 – 539, 564 – 565, 580, 584 – 585, 587 – 588, 592 – 594, 613 – 614, 663, 667, 674, 771 – 773, 809, 851, 934, 1013, 1051, 1053, 1055, 1061 – 1063, 1091 – 1092 point of view also see role shift and constructed action, 69, 365, 367, 369 – 372, 376 – 377, 380 – 383, 417, 502, 637, 674, 1049, 1051, 1054, 1066 polar question see question, polar politeness 229, 491, 494, 502 – 504, 810, 1026 possessive see pronoun, possessive pragmatics 38, 62, 175, 253, 388, 412 – 413, 417, 483, 489, 771, 1023 predicate 42 – 44, 77, 84, 87, 91, 95 – 96, 104, 119, 160, 164, 175, 212 – 213, 219, 254 – 255, 279, 281, 287, 298, 309, 320, 322 – 325, 331 – 332, 335 – 336, 348, 353, 374, 376, 380, 412 – 427, 432 – 434, 442 – 459, 476, 517, 594, 608 – 611, 636 – 639, 641, 660 – 661, 669 – 670, 718, 745, 767, 793, 835, 872, 1060, 1084 priming 405, 700 – 703, 711, 717 – 719 processing 4 – 7, 22, 31 – 32, 34, 38, 172, 324, 355, 393, 415 – 416, 427, 582, 584, 626, 632 – 633, 670, 687 – 690, 696, 699 – 707, 715 – 718, 724 – 725, 730, 734 – 735, 739 – 740, 744 – 755, 763, 765 – 768, 771 – 772, 774 – 781, 848, 856, 879, 923 – 924, 929, 933, 936, 938, 989, 992, 1033 – 1034, 1045, 1076, 1078, 1080 – 1081, 1095 production 7, 17, 21 – 22, 39, 101, 114, 172 – 173, 228, 248 – 249, 258, 276, 325, 373, 406, 412, 416, 469, 514, 516, 574 – 580, 583, 589 – 590, 592 – 594, 608 – 611, 618, 626, 628, 630, 632, 636, 638, 649 – 660, 662, 665, 667 – 670, 674, 676, 687 – 688, 699, 705, 740 – 742, 746 – 747, 749 – 750, 753, 755, 765, 769, 773, 775 – 779, 788, 804, 845 – 847, 853, 855, 964 – 970, 989, 991, 1023, 1026 – 1027, 1030, 1032 – 1033, 1037 – 1041, 1055 – 1056 productive, productivity 38 – 41, 81, 83, 91, 100, 103 – 105, 119, 164 – 166, 170 – 172, 180,
234, 275, 279, 322, 336, 403, 456, 459, 662 – 663, 668, 688, 705, 718 – 719, 734, 744, 767, 771 – 772, 819 – 823, 825, 835 – 836, 849, 903, 1011 – 1015, 1059, 1078 proform also see pronoun, 166, 227 – 228, 234, 240, 254, 822, 1085 prominence 5, 55 – 57, 59, 67 – 71, 119, 276, 282, 370 – 371, 462, 464, 474, 478, 480 – 481, 870 – 871, 958, 1032 pronominal, pronoun 59, 69, 84, 86, 88, 94, 96, 101, 112 – 113, 121 – 122, 124, 139, 141, 146, 175, 205, 207 – 211, 214 – 217, 219, 252, 267, 271 – 280, 287, 309, 323, 340, 348, 350 – 354, 357 – 361, 370 – 372, 376 – 380, 382 – 383, 388, 403, 405, 408, 413, 417, 440, 463, 469 – 473, 480, 482 – 483, 501, 533 – 534, 541, 543, 584 – 585, 587 – 588, 591, 593 – 594, 604, 610, 663, 666 – 667, 674, 794, 808 – 809, 851, 934, 1056 – 1057, 1062, 1089, 1093 – collective 121 – deictic 143, 198, 227 – 228, 231, 267, 403, 527, 587, 593, 667, 851, 1061 – 1062 – distributive 121 – 122 – exclusive 215, 233, 285, 416, 474, 565, 648, 754, 779, 865 – first 122, 229 – 234, 266, 370 – 372, 376, 378 – 379, 382 – 383, 501, 587, 1055 – 1056 – inclusive 233, 285 – non-first 122, 143 – 145, 228, 230 – 233 – possessive 129, 233, 267, 269 – 270, 273, 276, 278 – 280, 287, 538, 591 – reciprocal 212, 218, 223, 228, 234, 236 – 237, 239 – reflexive 228, 234, 236 – 237, 267, 277 – 280, 287, 466, 476 – relative 227 – 228, 234, 238 – 240, 309, 357 – 361 – second 121, 230 – 231, 1057 – third 121, 230 – 231 prosodic, prosody 4, 35, 55 – 58, 61 – 64, 67 – 71, 268, 293, 317, 324, 326, 341, 367, 468, 473 – 474, 478, 483 – 484, 981, 990, 1061 – constituent see constituent, prosodic – feature see feature, prosodic – hierarchy 56, 58 – model 22 – 27, 30 – 31, 37 – 38, 444, 677 – structure 22, 24, 27, 59, 61 – 62, 114, 395, 463 protolanguage 515 – 516, 545, 874 – 875
Q question 56, 62 – 67, 71, 80, 83 – 84, 89 – 90, 237 – 239, 246 – 247, 251, 259, 268, 292 – 302, 304 – 311, 325, 327, 345 – 354, 356, 359, 361, 464 – 465, 475, 479, 481, 491 – 493, 522, 526, 530, 534, 539, 541, 543 – 544, 603, 611 – 612, 641, 648, 661, 663 – 665, 671 – 673, 707, 722 – 723, 726, 808, 832 – 833, 851, 1030, 1033, 1036, 1089 – content 63 – 67, 237, 246 – 247, 292 – 294, 296 – 297, 299 – 302, 304 – 306, 308 – 311, 345 – 346, 349, 353 – 354, 526, 534, 648, 661, 663 – 665, 671 – 672, 1089 – non-manual marking 246 – 247, 292 – 297, 309, 526 – polar 63, 246 – 247, 251, 293 – 296, 300, 348 – 350, 354, 356, 361, 493, 526, 543, 641, 671, 673, 722 – 723, 726, 808, 832 – 833, 1089 – particle 247, 292, 296, 522, 543 – pronoun see question, sign – rhetorical 62, 308, 325, 479, 481 – sign 223, 292, 296 – 299, 301, 304 – 309, 526, 534, 539, 541, 544, 660, 664, 672 – 673, 794, 870 – wh- see question, content – word 80, 84, 247, 304, 307, 534, 541 – yes-no see question, polar quotation 230, 365 – 374, 377 – 382, 629, 633, 750, 1061
R rate of signing 193, 578, 594 reciprocal see verb, reciprocal recognition – automatic 1075, 1076 – 1078, 1081 – 1084 – error 1025 – interpretation 984, 989 – legal 889 – 896, 899, 903 – 904, 926, 950, 953 – 955, 1095 – linguistic 889 – 896, 899, 920, 926, 950, 954 – 955, 1047, 1095 – psycholinguistic 31, 107, 388, 408, 493, 699, 701, 704, 718, 732, 734, 750, 752 – 753, 929 recreolisation 862, 864, 879 – 881 recursion, recursive 517, 542, 610, 871 reduplication 29, 39, 77, 81, 96, 100, 104 – 106, 112 – 121, 123 – 128, 130 – 132, 140, 143, 193 – 196, 200, 217, 257, 277, 280, 282 – 284, 287, 306, 403, 537, 539, 544, 589, 809, 869, 871 – 873, 1089
reference 42, 44, 77, 84, 86 – 88, 90 – 91, 94, 118, 121 – 125, 127, 140, 144 – 146, 149 – 151, 160 – 166, 169, 171 – 178, 188 – 190, 200, 212, 222, 227 – 233, 236, 238 – 240, 253 – 254, 266 – 272, 274 – 280, 285, 287, 346, 348, 351 – 352, 360, 366, 371 – 372, 376 – 379, 382, 390 – 395, 399, 412 – 418, 420 – 427, 447, 453 – 457, 469, 470, 489, 491, 527, 530 – 532, 536 – 537, 543, 565, 584, 586 – 588, 591 – 593, 604 – 605, 609, 629, 638, 663, 666 – 671, 674 – 675, 688, 705, 717, 743, 766, 768, 773, 807, 823, 835, 847, 853, 1004, 1018, 1039, 1047, 1060, 1062 – 1063, 1066 – 1067, 1079 reflexive see pronoun, reflexive and verb, reflexive register 435, 502 – 503, 505, 580, 788 – 790, 792, 808 – 809, 955, 969, 987, 1026 – 1027, 1039 relative clause see clause, relative relative pronoun see pronoun, relative relativization see clause, relative repair see error, repair representational gesture see gesture, representational rhetorical question see question, rhetorical rhyme 128, 406, 701, 998 – 999, 1008, 1014, 1016 – 1017 – poetic 406, 998 – 999, 1008, 1014, 1016 – 1017 – syllable 128, 701 rhythm, rhythmic 28, 34 – 35, 55 – 57, 61, 105, 576, 578, 580, 629, 649, 650, 752, 998 – 1001, 1003 – 1004, 1007, 1014 – 1016, 1052 right hemisphere see hemisphere, right rightwards movement see movement, rightwards ring finger 13, 123 – 124, 232, 524 role shift also see constructed action and point of view, 152, 365, 368 – 373, 376 – 384, 397, 489, 500, 633, 638, 640, 674, 808 – 809, 1001, 1003, 1007, 1012, 1019, 1061 – 1062, 1078 root 22 – 23, 25, 30 – 31, 33, 37, 42, 46, 79, 88, 101, 150, 165 – 166, 168 – 169, 171 – 172, 179, 194, 207, 209, 321 – 322, 335, 341, 406, 432, 454, 504, 620, 817, 915, 1004, 1055
S school for the deaf see education, school for the deaf secondary sign language 513 – 514, 517, 528, 539 – 540, 543 – 544, 567, 867, 869
segment, segmentation 5, 7, 21, 23, 27, 29 – 32, 34 – 37, 46, 60, 81, 83, 98 – 99, 104 – 105, 519, 578, 580, 582, 616 – 617, 619, 629, 631, 657, 700, 724, 728 – 729, 796, 802, 809, 878, 1046, 1054, 1070, 1079 – 1080, 1083 semantic role see thematic role semantics 22, 29, 44, 58, 63 – 64, 66, 68 – 69, 71, 80 – 81, 84 – 85, 87, 91 – 92, 96 – 97, 100 – 105, 117 – 118, 120, 126, 128, 132, 138, 141, 149, 151, 158, 160 – 163, 170, 175 – 178, 191, 193 – 196, 200, 205, 207, 211 – 214, 217 – 222, 236, 253, 255 – 259, 268, 270, 320, 340, 356, 365, 380 – 383, 405, 407, 412 – 417, 421, 425, 427, 466, 467, 475, 478, 483, 492, 514 – 515, 538, 564, 586, 611, 613, 615, 626 – 632, 659, 670, 689, 694, 715 – 721, 724 – 725, 741 – 743, 747, 753, 765, 768, 776, 797, 799, 806, 818, 820 – 821, 834, 844, 854, 929, 938, 1010, 1013, 1023, 1026, 1031, 1049, 1055, 1058, 1060, 1076, 1087, 1089, 1093 – 1094 semantic change see change, semantic sentence, sentential – complex 63 – 65, 255, 293, 309, 340, 342, 347, 357 – 361, 376 – 377, 479, 522, 534, 610 – 611, 767, 774, 1032 – type 56, 64, 245, 251, 256, 726, 1088 – 1089, 1092 – complement see clause, complement – negation 60, 188, 316 – 320, 323 – 324, 327 – 336, 349 sequential 7, 15, 27, 29 – 32, 34, 36, 43, 60, 81 – 85, 89, 91 – 92, 95 – 97, 102 – 103, 107, 128, 131, 173, 218, 249, 321 – 322, 335 – 336, 343, 374, 519 – 520, 574, 576, 579, 582, 586, 595, 610, 629, 632, 637, 657, 669, 732, 769, 873, 1014, 1016 – 1017, 1039, 1051 shared sign language also see village sign language and the Index of sign languages, 146, 190, 423, 439, 552 – 553, 560 – 569, 603, 616, 789, 843, 893, 911, 937, 971, 981 short-term memory see memory, short-term sign space, signing space 105, 117 – 118, 121 – 132, 139 – 143, 164, 167 – 169, 177, 189, 197, 210, 217, 221 – 222, 228 – 230, 240, 266 – 267, 269, 276, 304, 403, 405, 407, 438, 455 – 456, 492, 495, 521 – 522, 524 – 525, 527, 542, 563 – 565, 569, 579, 587, 591, 594, 635, 637 – 639, 667, 697, 705, 744, 749, 791, 796, 804, 806, 809, 991, 1040, 1048, 1061, 1063, 1065, 1088 – 1089, 1091 – 1092
sign language acquisition see acquisition sign language planning see language planning sign system 209, 513, 519, 535, 568, 578, 588, 614, 866, 868, 911, 982 – 983 simultaneity, simultaneous 4 – 5, 13, 16, 23, 26 – 34, 59 – 60, 64, 70, 77 – 83, 86, 91, 96 – 97, 101 – 107, 128, 164, 168, 171, 173, 195, 218, 245 – 250, 252 – 257, 260, 273, 321 – 322, 335, 343, 374, 403, 412 – 413, 422 – 427, 470, 493, 496, 501, 516, 519 – 520, 544, 564 – 565, 569, 574, 576, 579, 582, 584, 586, 595, 629, 635, 637 – 641, 657, 666, 669, 672, 675 – 676, 697, 707, 711, 715 – 718, 725, 729, 732, 734, 772, 778 – 779, 792, 795, 801, 804, 807, 810, 845 – 846, 849, 873, 922, 957, 969, 990, 1001, 1016 – 1017, 1039, 1045, 1047, 1049, 1061, 1066 – 1067, 1080, 1084, 1095 – communication 778, 801, 922, 957 – construction 255 – 257, 412 – 413, 422, 424, 427, 564 – 565, 569, 807 – morphology 77, 82 – 83, 86, 520, 595 slip also see error – of the hand 38, 575, 713, 719 – 732, 735 – of the tongue 712, 716, 729 – 731, 1024 sonority 28, 1016 space, spatial also see sign space – coding 669, 694, 697 – 698 – gestural 143 – 146, 150 – mapping 418, 489, 499, 502, 874 – referential 266, 268, 412 – 414, 587 – semantic 412 – 413, 417 – 418, 427 – topographic 118, 131, 217, 412 – 416, 418, 427, 749, 773 – use of 173, 230, 266, 268, 279, 412 – 417, 424, 426, 499, 518, 558, 563, 587 – 588, 666, 749, 766, 773, 843, 854, 868, 874, 936 – 937, 968, 991, 1003, 1039, 1041 spatial syntax see syntax, spatial spatial verb see verb, spatial speech act 228, 324, 489 – 493, 1026, 1061 speech error see error speech reading 582, 916, 918 spreading 317, 325 – 326, 330 – 331, 722 standardization 800, 803, 889 – 891, 896 – 902, 905, 955, 1023 storytelling 501, 506, 540 – 541, 544, 961, 1010, 1036 stress 14, 56 – 58, 64, 67 – 69, 97, 106, 271 – 272, 274 – 275, 282, 293, 462, 466, 471, 473 – 475, 479 – 481, 483 – 484, 792, 850
style 415, 435, 502, 766, 789, 804, 808 – 810, 968, 1001, 1004, 1026 – 1027, 1034, 1039 stylistic 114, 298, 788 – 790, 793, 807, 808 – 810, 970, 972, 985 subcategorization 350, 356, 716 subject 44 – 45, 65, 138 – 139, 142 – 143, 148 – 151, 176 – 177, 188, 205 – 206, 208, 212, 215 – 218, 221, 234, 246, 249 – 250, 252 – 254, 267 – 268, 275, 278, 302, 304, 306, 308, 340, 345, 347 – 348, 350 – 352, 359, 369, 371, 376 – 377, 448, 454, 468 – 469, 471 – 472, 480 – 483, 520 – 522, 533, 538, 575, 587 – 588, 603, 613, 661 – 666, 675, 741, 744, 788, 790, 807 – 808, 832, 867, 872, 875, 1047, 1058, 1089, 1092 subordination see clause, subordinate syllable 7, 21, 23, 27 – 29, 31 – 34, 37, 46, 57, 59, 64, 83, 105, 114, 127, 131, 140, 281, 395, 575, 578 – 580, 582, 590, 648, 696, 713, 720, 726, 731, 733, 741, 826, 850, 1008 synonym 435 – 436, 900 – 901 syntactic, syntax 22, 40, 44, 55 – 59, 61 – 63, 65 – 66, 69 – 71, 141, 145, 162, 171 – 172, 210, 216, 218, 245, 250, 253, 257, 287, 310, 317, 320, 341, 367, 383, 468, 473 – 474, 478, 483 – 484, 514 – 515, 520, 526, 539, 542, 564, 579, 607, 633, 648, 661, 666 – 667, 674 – 675, 677, 715, 734 – 735, 745, 766, 768, 770, 773, 777, 780, 820, 835, 843, 868, 875, 877, 928, 931, 938, 990, 1007, 1023, 1029 – 1030, 1047, 1054, 1094 – constituent 57 – 58, 62 – 63, 69, 246 – 258, 276, 282, 286, 294, 303 – 305, 310, 322, 325, 330 – 331, 340 – 345, 353, 356, 358 – 359, 434, 442 – 448, 454 – 456, 464, 466 – 468, 471 – 476, 478 – 481, 520, 533, 611, 807, 818, 921, 1030 – 1031, 1038, 1094 – spatial 648, 661, 666 – 667, 674 – 675, 745, 766, 777 – word order also see constituent, order, 67, 146, 234, 239, 245 – 260, 265 – 266, 268 – 269, 271, 277, 279, 284 – 287, 293, 296 – 297, 301, 305 – 306, 308, 341 – 342, 347 – 348, 355, 359, 462 – 464, 469 – 470, 474, 478, 480 – 481, 484, 519 – 520, 530, 538, 542, 544, 575, 588, 594, 633, 648, 661 – 664, 675, 843, 853, 864, 867, 869 – 870, 873, 911, 922, 929, 1030, 1039, 1088, 1093 synthesis 715, 1075 – 1076, 1083 – 1085, 1095
T taboo 502 – 505, 536, 543, 930 tactile sign language also see deafblind, 513 – 514, 523 – 525, 527, 545
thematic role, theta role 44, 141, 148 – 149, 246, 254, 453 – 454, 587, 608 – 609, 613, 716, 1058, 1088 – actor 220, 253, 256, 293, 367, 375, 414 – 415, 607 – 610, 744, 807, 1050, 1058 – agent 42, 44, 79, 103, 148, 161, 164, 167, 205, 221, 246, 253, 255 – 256, 370, 382, 420, 613, 662, 771 – 772 – goal 44 – 45, 149, 205 – 206, 211 – 213, 220 – 221, 372, 826, 1060 – 1061 – patient 148, 205, 220, 246, 254 – 255, 607 – 611, 613, 807 – source 44 – 45, 149, 205 – 206, 211 – 213, 220, 255, 448, 454 – 456, 1060 – 1061 – theme 44, 149, 172, 254, 382, 420, 450, 613, 662 – 663 theme also see thematic role, theme, 406, 418, 463, 466, 468, 470, 1001 – 1002, 1008 – 1009 thumb 11 – 13, 123 – 124, 269, 277, 336, 391, 393, 399, 525, 530, 658, 742, 744, 794, 1011, 1049 tip of the finger 711, 717 – 718 topic 56, 61, 63, 65, 245 – 246, 248, 250 – 252, 254, 259 – 260, 294 – 295, 299, 304, 325, 330, 345 – 346, 348 – 349, 354 – 355, 359, 403, 424, 462 – 473, 476, 478 – 479, 481 – 484, 495, 497 – 498, 502, 520, 522, 542, 641, 648, 661, 663 – 664, 666, 671, 673, 807, 809, 820, 829, 832 – 833, 847, 867, 869 – 870, 970, 1055, 1057, 1061, 1067 topicalization 245, 248, 250 – 252, 260, 809 topic-comment 246, 250 – 251, 259, 424, 468, 520, 832 – 833, 867, 870 topographic use of space see space, topographic Total Communication 803, 925, 957 transcription see notation transitive, transitivity 149, 166 – 169, 172, 177, 207, 210, 212 – 213, 222, 253, 256, 259, 273, 420, 426 – 427, 468, 564 – 565, 609, 1059 translation 103, 530, 847, 850 – 851, 895, 969, 981 – 982, 984 – 986, 990, 1001, 1013, 1029, 1031, 1049 – 1050, 1055, 1057, 1059, 1066 – 1067, 1069, 1075 – 1076, 1078, 1084, 1088, 1093 – 1095 triplication 113 – 115, 117, 128, 130 – 132 turn, turn-taking 70, 368, 489 – 490, 493 – 499, 507, 527, 790, 1023, 1027 typological, typology 22, 32 – 34, 38, 85, 112 – 113, 117, 120, 123, 133, 151, 160, 200, 210, 222, 245 – 246, 248 – 253, 258 – 260, 276,
280, 292 – 297, 306, 311, 316 – 317, 340, 350, 353, 357, 361, 413, 423, 426 – 427, 436, 446, 467, 474, 476 – 477, 479 – 481, 484, 513 – 514, 517, 519 – 523, 542, 545, 577 – 579, 587, 594, 617, 619, 650, 660, 713, 734, 771, 828, 831, 836, 852, 937 – 938, 1023, 1046 – 1047
U urban sign languages see the Index of sign languages use of space see space, use of Usher syndrome 523 – 524
V variation – grammatical 545, 788, 790, 807 – lexical 788, 790, 796 – 805, 889, 898, 902, 905 – regional 797 – 799, 899 – 903, 955, 1038 – sociolinguistic 788 – 789, 791 – 792, 795 – 796, 798 – 802, 805, 807, 810, 902, 930, 1035 verb – agreeing also see verb, directional, 44 – 46, 82, 91, 96, 112, 124 – 125, 131 – 132, 136 – 139, 142, 144 – 146, 148, 150 – 153, 205 – 206, 216 – 218, 220, 229, 231, 254, 267, 279 – 280, 328, 336, 348, 354, 371 – 372, 379, 383, 413, 447 – 449, 452 – 455, 457, 470, 499, 522, 543, 584, 586 – 587, 593 – 594, 610, 642, 661, 666 – 669, 674, 707, 771, 853, 873, 929 – backwards 149 – 150 – classifier 43 – 44, 112 – 113, 124 – 127, 131 – 132, 158 – 159, 164 – 166, 168 – 180, 347 – 348, 374, 412 – 413, 415 – 418, 420 – 423, 425 – 427, 432, 434, 448 – 449, 564 – 565, 594, 636 – 639, 661, 669 – 670, 718, 745, 767, 835, 1060, 1091 – directional also see verb, agreeing, 413 – 414, 418, 426, 868, 1088, 1091 – indicating 229, 807 – modal 94, 187 – 188, 196 – 200, 301, 534, 818 – plain 44, 95, 138 – 139, 150, 168, 204 – 206, 212 – 213, 216 – 217, 222, 256, 322, 328,
347 – 348, 447 – 449, 452 – 454, 522, 537 – 538, 588, 807 – reciprocal 91, 96, 106, 116, 205, 212, 218, 223, 237, 543, 719 – reflexive 277 – 280 – spatial 44, 95, 138 – 139, 143, 147 – 152, 164, 215, 414, 434, 447 – 448, 454 – 456, 537, 543 – 544, 588, 648, 663, 771, 773, 874 village sign language also see shared sign language and the Index of sign languages, 146, 229, 259, 423, 518 – 519, 522 – 523, 543, 545, 552, 586, 588, 603, 789, 854, 864, 867 – 868, 910, 971, 982 vision 4, 32, 37, 131, 494, 507, 523 – 524, 582 – 583, 765, 779, 933, 1082 visual contact see eye contact visual perception see perception visual salience 131
W Wernicke’s area 740, 743, 748, 754, 767 wh-cleft 310, 467, 473 – 474, 478 – 479, 481, 484, 809 whole entity classifier see classifier, (whole) entity wh-question see question, content word class also see part of speech, 77 – 78, 81, 83 – 97, 433 – 434, 533, 807, 825 – 826, 834, 848 word formation 40, 77, 96, 101, 104, 106 – 107, 179, 533, 543, 579, 606, 729, 816, 818 – 819, 824 – 826, 836 word order see constituent, order and syntax, word order working memory see memory, working
Y yes-no question see question, polar
Z zero marking 97, 113 – 115, 117 – 120, 128, 130 – 132, 143 – 145, 522
Index of sign languages
A Aboriginal sign languages also see Warlpiri Sign Language, North Central Desert Sign Language, and Yolngu Sign Language, 517, 528, 535 – 539, 543 – 544, 551, 930, 947 ABSL see Al-Sayyid Bedouin Sign Language Abu Shara Bedouin Sign Language see Al-Sayyid Bedouin Sign Language Adamorobe Sign Language 158, 258, 423, 426, 439, 560 – 565, 567, 869 AdaSL see Adamorobe Sign Language Al-Sayyid Bedouin Sign Language 40, 92 – 94, 98, 102, 104, 146, 216, 258, 558, 564 – 565, 569, 588, 616, 788, 867 – 869, 874 American Sign Language 6, 8, 12 – 15, 26, 28 – 29, 33, 35 – 36, 38, 42 – 45, 56, 60 – 61, 63 – 64, 68 – 69, 80, 86, 89, 91, 95 – 96, 98 – 100, 102 – 104, 106 – 107, 113, 117 – 120, 122 – 125, 137 – 142, 147, 149 – 150, 158 – 164, 173 – 175, 187 – 190, 192 – 194, 196 – 199, 214, 216, 218, 227 – 228, 230 – 240, 246 – 247, 249 – 252, 257 – 259, 265 – 287, 293 – 310, 317, 322 – 323, 326, 329 – 332, 334, 342 – 343, 345 – 355, 357 – 361, 368 – 370, 373 – 374, 376 – 377, 388 – 389, 391 – 395, 397 – 399, 401 – 407, 415, 418, 425, 427, 432 – 435, 437 – 442, 444, 449, 458, 465, 467, 469, 471 – 483, 489, 492, 494 – 496, 498 – 507, 518 – 521, 524 – 527, 532 – 533, 536, 541, 554, 561 – 562, 575 – 581, 583 – 594, 605, 609, 637 – 638, 640 – 641, 649, 651 – 658, 660 – 677, 689 – 690, 692, 694 – 699, 701, 703, 705 – 706, 712, 717 – 719, 721, 728, 742 – 745, 747 – 750, 752 – 753, 765, 769 – 770, 775, 778, 788, 790 – 805, 807 – 810, 817, 820 – 821, 823, 825 – 826, 828 – 833, 835, 843 – 850, 853 – 855, 863, 866 – 867, 869 – 871, 873, 877, 880, 892 – 893, 898, 911, 918 – 919, 921 – 924, 929, 935 – 938, 954, 961, 967 – 968, 970, 981, 990 – 991, 1000 – 1004, 1007 – 1010, 1012 – 1015, 1017, 1030, 1035, 1049 – 1050, 1052, 1054 – 1059, 1061 – 1062, 1067, 1069, 1081, 1084, 1093 – 1094
Argentine Sign Language 189, 196, 199 – 200, 207, 210 – 211, 218, 223, 248, 285, 439, 929 ASL see American Sign Language Auslan see Australian Sign Language Australian Aboriginal sign languages see Aboriginal sign languages Australian Sign Language 13, 56, 80, 90 – 91, 95, 98 – 99, 102, 106, 142, 161, 163, 253 – 254, 259, 274, 278, 293 – 294, 307, 342 – 343, 351, 388, 397, 399 – 400, 405, 495 – 496, 575, 590, 638, 670, 788, 790, 792, 795 – 798, 801 – 803, 806 – 808, 810, 821 – 823, 826 – 827, 836, 842, 849 – 850, 852, 894, 898, 930, 934, 990, 1050 Austrian Sign Language 36, 90, 94 – 95, 109, 113, 118, 120, 128, 161, 233, 283, 294, 302, 305, 307, 342, 449, 482 – 483, 1057
B Ban Khor Sign Language 557, 560, 936 Brazilian Sign Language 137, 151, 196, 198 – 200, 207, 209 – 210, 217 – 220, 231, 237, 239, 246 – 247, 257, 293 – 294, 304 – 308, 327 – 329, 334, 474, 476, 482, 518, 588, 593, 655 – 659, 661, 663, 665 – 666, 668, 674, 676, 797, 929, 1003 British Sign Language 56, 61, 80, 85, 90, 95, 98, 101 – 102, 104, 106, 113, 117, 122, 127, 137, 161, 171, 174, 189 – 190, 193 – 194, 216, 218, 228 – 239, 246, 249 – 250, 253, 256, 259, 276, 285, 294, 325, 334, 342, 372, 399, 401, 489, 494 – 497, 499, 588 – 590, 593, 653, 655 – 660, 668 – 669, 675, 694, 696 – 697, 700 – 701, 703 – 704, 749, 751 – 752, 765 – 767, 770 – 777, 780 – 781, 788, 790 – 792, 795, 797 – 798, 800 – 801, 804, 806 – 807, 810, 821, 825, 827, 848 – 852, 854 – 855, 869, 871, 896 – 898, 924, 926, 934 – 935, 983 – 984, 1000 – 1003, 1005 – 1007, 1009 – 1013, 1015, 1017 – 1018, 1029, 1050, 1059, 1065, 1082, 1093 BSL see British Sign Language Bulgarian Sign Language 936
C Cambodian Sign Language 229, 929 Catalan Sign Language 70, 137, 151, 198, 209 – 210, 217, 285, 294, 307, 318, 320 – 321, 323 – 324, 326, 331 – 335, 372, 379 – 380, 833 – 834, 894 Chilean Sign Language 929 Chinese Sign Language 325, 329 – 330, 334, 336, 388, 928, 970 CisSL see Cistercian Sign Language Cistercian Sign Language 532 – 534, 543 – 544 Colombian Sign Language 929 – 930 Congolese Sign Language 931 Croatian Sign Language 36, 231, 233, 257, 285, 294, 296, 302, 305, 307 – 308, 469 – 470, 482 CSL see Chinese Sign Language Czech Sign Language 894, 936 – 937
D Danish Sign Language 56, 91, 163, 209, 233, 246, 274, 371 – 372, 378, 388, 500, 898, 925, 935, 1055 – 1057 DGS see German Sign Language DSGS see Swiss-German Sign Language DSL see Danish Sign Language Dutch Sign Language see Sign Language of the Netherlands
E Egyptian Sign Language 931 Ethiopian Sign Language 931
F Filipino Sign Language 497, 797, 929 Finnish Sign Language 28 – 29, 51, 233, 237, 294, 307, 315, 319, 321 – 322, 336, 339, 468, 486, 652, 654 – 656, 925 – 926, 945, 988 FinSL see Finnish Sign Language Flemish Sign Language 113, 118, 122, 207, 209 – 211, 220, 253 – 254, 257, 259, 294, 325 – 326, 334, 778, 797, 898, 1038 – 1039 French-African Sign Language 935
French Sign Language 85, 158, 198, 251, 259, 524, 564, 577, 791, 797, 829 – 831, 853 – 854, 866, 915 – 918, 924, 926 – 927, 935 – 936, 967 – 970, 1014, 1057
G German Sign Language 13, 61, 70, 79, 87, 102, 106, 113 – 118, 120 – 127, 137, 158, 161, 163, 166, 181, 188, 193, 197 – 200, 207 – 212, 215, 217 – 220, 236 – 239, 246 – 247, 251, 280, 283 – 284, 309, 317, 326, 331 – 336, 357 – 361, 372, 379, 421, 423, 425 – 426, 477, 521, 635 – 636, 668, 711, 719 – 722, 724 – 734, 754, 820, 831 – 832, 850, 854, 869, 871 – 872, 891, 898, 926, 933, 935, 966, 1054 – 1057, 1059, 1093 Greek Sign Language 137, 192, 196, 207 – 209, 212 – 213, 217 – 220, 222 – 223, 325 – 326, 593, 851 GSL see Greek Sign Language Guyana Sign Language 930
H Hai Phong Sign Language 936 Ha Noi Sign Language 936 Hausa Sign Language 113, 189, 283, 566, 931 HKSL see Hong Kong Sign Language Ho Chi Minh Sign Language 936 homesign 40 – 41, 407, 517, 543, 545, 565, 577, 594, 601 – 603, 605 – 620, 651, 867 – 868, 875 – 876, 879, 911, 914, 1028 Hong Kong Sign Language 158, 163, 233, 253 – 254, 294, 296, 307, 320, 322, 335 – 336, 341, 343 – 349, 351 – 357, 359 – 361, 432 – 434, 437 – 441, 443 – 444, 446, 448 – 452, 456 – 458, 669, 928 HZJ see Croatian Sign Language
I Icelandic Sign Language 251, 935 Indian Sign Language 311, 551, 558, 928 Indo-Pakistani Sign Language 79, 86, 98, 102, 113, 118 – 119, 122, 130, 137, 193 – 194, 206 – 210, 213 – 214, 217 – 219, 223, 232, 237,
1122
Indexes 293 – 294, 304 – 305, 307 – 309, 319, 518, 521, 640, 797, 821, 823 – 824, 829, 928 International Sign 567, 841, 852 – 854, 925, 932, 935 – 936, 980, 990 – 992, 1009 – 1010, 1018, 1077 Inuit Sign Language 554, 564 IPSL see Indo-Pakistani Sign Language Irish Sign Language 102, 200, 251, 253 – 254, 256, 259, 326, 351, 477, 500, 803 – 807, 849, 852, 854, 935, 1042 IS see International Sign ISL see Irish Sign Language or Israeli Sign Language Israeli Sign Language 56, 59 – 61, 64 – 65, 67 – 70, 71, 78 – 80, 82, 85, 88 – 92, 94 – 95, 98, 100 – 106, 113, 120, 122, 137, 140, 171, 192, 234, 251, 294, 307, 310, 321 – 322, 327 – 328, 335, 350, 580, 639, 666, 817, 820, 828, 852, 854 – 855, 867, 869, 874, 898, 928, 935 Italian Sign Language 28, 106, 113, 117 – 118, 120, 127, 161, 187, 190 – 192, 238 – 239, 246 – 247, 253 – 255, 279, 286, 293, 295 – 296, 301 – 303, 305, 307 – 309, 318, 323, 327, 335, 355 – 359, 361, 378 – 379, 382, 390, 476, 492, 520 – 521, 524, 655, 851, 854, 927, 1010, 1030, 1050
J Jamaican Sign Language 555 Japanese Sign Language 137, 146, 161, 209 – 210, 214, 217, 219, 223, 229, 234, 237, 294, 307, 324, 336, 439, 518, 585, 587 – 588, 590, 653, 661, 898, 928, 934, 1084 Jordanian Sign Language 158, 248 – 249, 258, 318 – 319, 326 – 327, 334, 336, 425, 499 – 500, 928
K Kata Kolok see Sign Language of Desa Kolok Kenyan Sign Language 931 KK see Sign Language of Desa Kolok Korean Sign Language see South Korean Sign Language KSL see South Korean Sign Language
L Lebanese Sign Language 326 LIBRAS see Brazilian Sign Language Libyan Sign Language 931 LIL see Lebanese Sign Language LIS see Italian Sign Language LIU see Jordanian Sign Language LSA see Argentine Sign Language LSB see Brazilian Sign Language LSC see Catalan Sign Language LSE see Spanish Sign Language LSF see French Sign Language LSFA see French-African Sign Language LSQ see Quebec Sign Language
M Malian Sign Language 566 Malinese Sign Language 931 Manually-coded English 518, 843 – 844 Mardin Sign Language 981, 993 Maritime Sign Language 855, 934 Martha's Vineyard Sign Language 40, 554, 560, 843, 893, 918, 971, 981 – 982 Mauritian Sign Language 867 – 869, 871, 874, 898 MCE see Manually-coded English Mexican Sign Language 935 Moldova Sign Language 936 monastic sign languages also see Cistercian Sign Language, 528, 531, 544 Moroccan Sign Language 931 MSL see Mauritian Sign Language MVSL see Martha's Vineyard Sign Language
N NCDSL see North Central Desert Sign Language Nederlands met Gebaren see Sign-supported Dutch New Zealand Sign Language 98, 102, 233, 294, 307, 319, 325, 329, 336, 500, 788, 790, 792, 795 – 797, 800 – 803, 806 – 808, 810, 847, 849, 930, 934, 988 NGT see Sign Language of the Netherlands Nicaraguan Sign Language 40, 85, 194, 209, 268, 372, 395, 407, 427, 518, 543, 545, 564, 566, 577, 619, 641, 817, 863, 867, 950
North American Indian Sign Language also see Plains Indian Sign Language, 539 – 540 North Central Desert Sign Language 535, 537 – 540, 543 – 544 Norwegian Sign Language 40, 659, 926 NS see Japanese Sign Language NSL see Norwegian Sign Language NZSL see New Zealand Sign Language
O ÖGS see Austrian Sign Language Old French Sign Language 198, 564, 915 Original Bangkok Sign Language 936 Original Chiangmai Sign Language 936
P Paraguayan Sign Language 930 PISL see Plains Indian Sign Language or Providence Island Sign Language Plains Indian Sign Language 229, 439, 528, 539 – 544, 554 – 555 Polish Sign Language 1084 Portuguese Sign Language 935 Providence Island Sign Language 439, 555, 561 – 562, 564 – 565, 567, 569 Puerto Rican Sign Language 495
Q Quebec Sign Language 102, 246, 250, 278, 294, 326, 372, 587, 652, 676, 849, 924 – 925, 935, 967 – 968
R RSL see Russian Sign Language Russian Sign Language 87, 90, 253, 257, 326, 854 – 855, 927, 935 – 936 Rwandan Sign Language 931
S SASL see South African Sign Language Sawmill Sign Language 528, 530 – 531, 543 – 545
secondary sign languages 513 – 514, 517, 528, 539 – 540, 543 – 544, 567, 867, 869 SGSL see Swiss-German Sign Language shared sign languages also see village sign languages, 146, 190, 423, 439, 552 – 553, 560 – 569, 603, 616, 789, 843, 893, 911, 937, 971, 981 Signing Exact English 578, 588, 957 Sign Language of Desa Kolok 87, 146, 158, 181, 189, 229, 522, 557 – 560, 562 – 565, 567 – 568, 573, 893, 981 Sign Language of the Netherlands 8 – 9, 11, 14 – 16, 66, 68, 113, 117, 120, 127, 137, 158, 163, 169, 171, 181, 189, 193 – 195, 200, 209 – 210, 214 – 215, 217 – 219, 222 – 223, 236, 248 – 249, 252 – 256, 258 – 259, 294, 304 – 305, 307, 343, 350 – 353, 355, 357, 388, 397 – 400, 471, 477, 490 – 491, 493, 495, 498, 504, 518 – 521, 524 – 527, 561, 580, 586, 588, 590, 660 – 662, 670, 676, 704, 797, 845 – 846, 850 – 851, 873, 889, 895 – 903, 905, 925, 927, 933, 1010, 1013, 1017 – 1018, 1029, 1059 – 1062, 1069, 1082, 1093 Sign-supported Dutch 518 SKSL see South Korean Sign Language Slovakian Sign Language 936 South African Sign Language 253, 374, 388, 425, 797, 931, 954 – 955 South Korean Sign Language 137, 388, 929, 934 Spanish Sign Language 188 – 189, 195, 294, 307, 653, 656, 703, 854, 894 SSL see Swedish Sign Language Swedish Sign Language 193 – 194, 196, 230, 251, 259, 372, 518, 524, 526 – 527, 698, 808, 893, 896, 925, 935, 954, 1029, 1066 Swiss-German Sign Language 61, 79, 253, 850, 898, 905, 927, 940, 1009
T Tactile American Sign Language 524 – 527 Tactile French Sign Language 524 Tactile Italian Sign Language 524 Tactile Sign Language of the Netherlands 524, 526 – 527 tactile sign languages 499, 513 – 514, 523 – 528, 576
Tactile Swedish Sign Language 524, 526 – 527 Taiwan Sign Language 163, 209 – 210, 214 – 220, 222 – 223, 234, 286, 405, 587 – 588, 638, 849, 928 – 929, 934 Thai Sign Language 163, 557, 936 TİD see Turkish Sign Language TSL see Taiwan Sign Language Turkish Sign Language 113, 118 – 119, 122, 127, 158, 161, 163, 181, 193, 195, 294, 318 – 319, 321, 326 – 327, 334, 423, 426, 521, 928, 1057
U Ugandan Sign Language 931 Ukrainian Sign Language 936 urban sign languages 439, 519, 568, 789, 796, 802, 812, 929, 982, 985, 994 Uruguayan Sign Language 895, 930
V Venezuelan Sign Language 505, 930 VGT see Flemish Sign Language village sign languages also see shared sign languages, 146, 229, 259, 423, 518 – 519, 522 – 523, 543, 545, 552, 586, 588, 603, 789, 854, 864, 867 – 868, 910, 971, 982
W Warlpiri Sign Language 535 – 539 WSL see Warlpiri Sign Language
Y Yolngu Sign Language 535 – 537, 539, 544, 869, 874 YSL see Yolngu Sign Language
Index of spoken languages
A Austronesian languages 32, 872
B Bainouk 140 Burmese 175, 929
C Cantonese 342, 349 – 351, 432 – 434, 440 – 441, 458 Cape Verde Creole 865 Catalan 67, 480 – 482, 894 Chinese 104, 188, 191 – 192, 222, 246, 342, 437, 521, 616, 970 Creoles 85, 561, 567, 586, 842 – 844, 852, 865, 870 – 872, 874, 880, 935 Croatian 36, 676
D Dagaare 39 Djambarrpuyngu 537 Dutch 193, 211, 214 – 215, 218 – 219, 272, 521, 676, 845 – 846, 850, 897, 899, 903, 1029, 1039, 1059 – 1060
E Emmi 175 English 29, 32 – 33, 36, 59, 63, 67, 80, 83, 88 – 89, 92 – 93, 97, 100, 102 – 103, 121, 128 – 129, 172, 174, 189, 191, 196 – 197, 207, 221, 233, 236, 238 – 239, 258, 271 – 272, 278, 284, 286, 295, 298, 310, 347 – 348, 351, 353, 360, 369 – 370, 378, 382, 392, 398, 400, 403, 432 – 436, 438, 440 – 441, 446 – 447, 449, 456, 458 – 459, 466 – 467, 472 – 473, 475 – 476, 478 – 482, 492 – 493, 495 – 496, 504, 507, 521, 530, 534, 537, 541, 578 – 579, 584, 586 – 587, 610, 616 – 617, 630 – 632, 635 – 638, 640, 659 – 660, 667, 676, 690, 694, 698 – 699, 716 – 718, 725, 733, 744, 747 – 749, 751, 770, 772 – 774, 776 – 777, 790, 800 – 801, 806, 808, 820 – 821, 830, 842 – 849, 874 – 876, 894 – 897, 911, 916, 921 – 922, 935, 967 – 968, 970, 980, 982 – 983, 985, 987 – 991, 1001 – 1002, 1006 – 1007, 1009 – 1010, 1012 – 1013, 1029, 1041 – 1042, 1046, 1056 – 1060, 1088 – 1089, 1093 – 1094
F French 58, 60, 269, 272, 275 – 276, 278, 435 – 436, 521, 586, 664, 676, 870 – 872, 875, 891, 897, 904, 915, 935, 952, 967 – 970, 1041, 1057
G German 33, 36, 124, 128 – 129, 206, 211, 219, 721, 729 – 733, 763, 843, 876, 891, 897, 965 – 966, 1057, 1060 Greek 221, 272, 360, 542, 897 Gunwinggu 176 – 177, 1060
H Hawaiian 33, 865, 874 Hawaiian Creole 865 Hmong 32 – 33 Hungarian 129, 246, 283, 466, 473
I Ilokano 127 – 128 Italian 37, 191, 253, 272, 274, 293, 295, 390, 717, 772, 851, 891, 897
J Jamaican Creole 866 Japanese 129 – 130, 252, 302, 308, 347, 471
K Koyukon 177 Kwa languages 872
L Latin 83, 198, 891, 897, 1056, 1058
M Mandarin Chinese 222 Mauritian Creole 870 – 871, 875, 880 Miraña 178 – 179 Mundurukú 176
N Navajo 33, 175, 1014 Ngukurr Creole 871, 875 Norwegian 269, 275, 278, 659, 889, 926
P Palikur 177 Pidgins 85, 561, 567, 842 – 844, 852 – 853, 864, 874 – 875, 936
R Reunion Creole 865 Romanian 466 Russian 442, 482, 1057
S Saramaccan 872 – 873 Seychelles Creole 872 – 873 Shona 32 Spanish 272, 482, 498, 616 – 618, 808, 868, 894
T Tagalog 129 – 130 Tashkent 175 Terena 176 Thai 175 Tok Pisin 85 Tonga 222 Turkish 33, 128, 283, 308, 521, 616 – 619, 632, 667, 1046 – 1047
W Warlpiri 128, 246, 537 – 539 West Greenlandic 32 – 33
Y Yidin 32