The Routledge Handbook of Theoretical and Experimental Sign Language Research
LCCN 2020034675 (print), 2020034676 (ebook) | ISBN 9781138801998 (hbk), 9780367640996 (pbk), 9781315754499 (ebk)



English | 732 [733] pages | 2021


Table of contents:
Cover
Endorsement
Half Title
Series Information
Title Page
Copyright Page
Table of contents
Figures
Tables
Contributors
Preface
Editors’ acknowledgments
Notational conventions
Abbreviations of non-manual markers
Sign language acronyms
1 Sign language phonology: Theoretical perspectives
1.1 Introduction
1.2 Basic units and constraints
1.2.1 Handshape
1.2.2 Orientation
1.2.3 Location
1.2.4 Movement types
1.2.5 Two-handed signs
1.3 Signs as single segments
1.4 What about syllable structure?
1.5 Rules
1.5.1 Grammatical phonology and utterance phonology
1.5.2 Why do sign languages lack a grammatical phonology?
1.6 Iconicity
1.6.1 Discrete iconicity and gradual iconicity
1.6.2 Gradual iconicity
1.6.3 Incidental discrete iconicity
1.6.4 Recurrent discrete iconicity
1.7 Concluding remarks
Acknowledgments
Notes
References
2 Phonological comprehension: Experimental perspectives
2.1 Introduction
2.2 Perceptual sign language characteristics
2.3 Categorical perception
2.4 Linguistic experience
2.5 Acquisition perspectives
2.6 Coarticulatory effects
2.7 Conclusion
References
3 Lexical processing in comprehension and production: Experimental perspectives
3.1 Introduction
3.2 Deafness, plasticity, and the language network
3.3 Sign processing
3.3.1 Signs vs. body movements and gestures
3.3.2 A few notes about lexical access in comprehension and production
3.3.3 Lexicality, lexical frequency, and semantic effects in sign comprehension
3.3.4 Sign production
3.3.5 Iconicity: the link between meaning and form
3.4 Processing of lexical signs: sublexical units
3.4.1 Comprehension
3.4.2 Production
3.5 Cross-linguistic influences on sign language processing: bimodal bilingualism
3.6 Conclusion
References
4 Prosody: Theoretical and experimental perspectives
4.1 Introduction
4.2 Theoretical description
4.2.1 The prosodic hierarchy
4.2.2 The syllable and mora
4.2.3 Prosodic word
4.2.4 Phonological phrase
4.2.5 Intonational phrase
4.2.6 Relationship between syntactic and prosodic structure
4.3 Experimental studies
4.3.1 Perception of prosody
4.3.2 Acquisition
4.3.3 Emergence of prosodic structure
4.3.4 Neurolinguistic studies
4.4 Future directions: the relationship between audio-visual prosody and sign language prosody
4.5 Summary and conclusion
Acknowledgments
Notes
References
5 Verb agreement: Theoretical perspectives
5.1 Introduction
5.2 Properties of agreement in sign languages
5.2.1 Agreement markers
5.2.2 Verb classes and agreement
5.2.3 Agreement auxiliaries
5.2.4 Non-manual agreement
5.3 Theoretical analyses
5.3.1 Thematic approaches
5.3.2 Syntactic approaches
5.3.2.1 Foundations of a syntactic approach
5.3.2.2 Generative syntactic analyses
5.3.2.3 Clitic analysis
5.4 Conclusion
Acknowledgments
Notes
References
6 Verb agreement: Experimental perspectives
6.1 Introduction
6.2 The acquisition of verb agreement
6.3 Verb agreement tested with offline methods
6.3.1 Agreement tested in reaction time studies
6.3.2 Agreement tested in eye tracking studies
6.4 Verb agreement tested with online methods
6.4.1 ERP studies on sign language agreement – a morphosyntactic analysis
6.4.2 ERP studies on sign language agreement – an alternative analysis
6.5 Conclusion
Notes
References
7 Classifiers: Theoretical perspectives
7.1 Introduction
7.2 Typology of classifiers in sign languages
7.3 Verb root/stem analysis
7.4 Noun incorporation analysis
7.5 Analyses in terms of agreement
7.5.1 Analyses within the framework of Distributed Morphology
7.5.1.1 Classifiers as agreement markers
7.5.1.2 Gender agreement
7.5.1.3 Root compounds
7.5.2 Agreement analysis and argument structure
7.5.2.1 Projection of a verbal classifier phrase
7.5.2.2 Classifiers as heads of functional projections
7.5.2.3 Transitive-transitive alternation based on instrumental classifiers
7.5.2.4 Cross-linguistic variation: data from HKSL and TJSL
7.6 Syntactic structure of classifier predicates is built upon event structure
7.7 Semantic analyses of classifier predicates
7.8 Conclusion
Notes
References
8 Classifiers: Experimental perspectives
8.1 Introduction
8.2 Acquisition of classifiers
8.2.1 Classifier constructions in L1 acquisition
8.2.2 L2 acquisition of classifier constructions
8.3 Gesture and classifier constructions
8.4 Psycholinguistic studies
8.5 Neurolinguistic studies
8.5.1 Studies with brain-damaged participants
8.5.2 Brain imaging studies
8.6 Discussion
8.7 Summary and conclusion
Notes
References
9 Aspect: Theoretical and experimental perspectives
9.1 Theoretical foundations of aspect
9.1.1 Lexical aspect (Aktionsart/event structure)
9.1.2 Grammatical aspect
9.2 Viewpoint aspect in sign languages
9.2.1 Free aspectual markers
9.2.2 Bound markers of aspect
9.3 Event structure and reference time representation in sign languages
9.3.1 Markers of event structure
9.3.2 Experimental investigations of aspect and event structure in sign languages
9.4 Conclusion
Acknowledgments
Notes
References
10 Determiner phrases: Theoretical perspectives
10.1 Introduction
10.2 Building nouns
10.3 Building… determiner phrases?
10.3.1 Sign languages and the NP/DP parameter
10.3.2 The categorial status of pointing signs
10.4 Word order patterns
10.5 Possessives
10.6 Conclusion
Notes
References
11 Content interrogatives: Theoretical and experimental perspectives
11.1 Introduction
11.2 Theoretical perspectives
11.2.1 Positions of the interrogative signs and the leftward/rightward controversy
11.2.1.1 Doubling constructions
11.2.1.2 Single sentence-initial interrogative signs
11.2.1.3 Single sentence-final interrogative signs
11.2.1.4 Role of the non-manual markers
11.2.1.5 Long-distance extraction of interrogative signs
11.2.1.6 Sentence-final interrogative signs undergoing focus movement
11.2.1.7 A linearization account for wh-doubling constructions in Libras
11.2.1.8 Clefted question analyses
11.2.1.8.1 Interrogatives with single sentence-final interrogative signs in ASL
11.2.1.8.2 Wh-Doubling constructions in LIS
11.2.1.9 ‘No movement’ analysis
11.2.1.10 Accounts for the contrast between sign and spoken languages
11.2.2 Question particles as clause-typers
11.2.3 Form and functions of non-manual marking in content interrogatives
11.2.3.1 Markers of the scope of the [+wh] operators
11.2.3.2 Functions of individual non-manual markers
11.2.4 Multiple wh-questions
11.2.5 Embedded content interrogatives
11.2.5.1 Embedded content interrogatives as complement clauses
11.2.5.2 Rhetorical questions, wh-clefts, or question-answer clauses?
11.3 Experimental perspectives
11.3.1 Acquisition of content interrogatives
11.3.2 Emergence of content interrogatives in a homesign system
11.3.3 Emergence of grammatical non-manual markers for content interrogatives in young sign languages
11.3.4 Processing of content interrogatives
11.4 Conclusion
Acknowledgments
Notes
References
12 Negation: Theoretical and experimental perspectives
12.1 Introduction
12.2 Theoretical perspectives
12.2.1 Position of negation in the clause structure
12.2.1.1 The Final-Over-Final Constraint
12.2.1.2 SOV sign languages in light of the FOFC
12.2.1.3 SVO sign languages in light of the FOFC
12.2.1.4 Other distributions of negation in a sentence and the FOFC
12.2.2 Non-manual markers
12.2.3 Formal approaches to typological issues
12.2.3.1 Goodwin (2013): a formal syntactic typology based on where [+neg] occurs
12.2.3.2 Pfau (2016): a formal syntactic typology based on feature values
12.3 Experimental perspectives
12.3.1 Acquisition of negation by Deaf children learning ASL
12.3.2 Negation in a homesign system
12.3.3 Neurolinguistic evidence
12.4 Conclusion
Acknowledgments
Notes
References
13 Null arguments and ellipsis: Theoretical perspectives
13.1 Introduction
13.2 Earlier work on null arguments in sign languages
13.2.1 Null arguments in spoken languages
13.2.2 Lillo-Martin (1986) on null arguments in American Sign Language
13.2.3 Neidle et al. (1996, 2000) on null arguments in American Sign Language
13.3 VP ellipsis in sign languages
13.4 The ellipsis analysis of null arguments
13.5 Conclusion
Notes
References
14 Null arguments: Experimental perspectives
14.1 Introduction
14.2 Psycholinguistic studies with adults
14.3 Acquisition studies
14.3.1 Acquisition of null arguments – syntactic factors (Deaf native signers)
14.3.2 Null and overt arguments in reference tracking (Deaf and hearing native signers)
14.3.3 Adult L2 learners
14.4 Discussion and conclusion
Notes
References
15 Relative clauses: Theoretical perspectives
15.1 Introduction: the cross-linguistic investigation of relative constructions
15.2 Syntactic typologies of relativization
15.2.1 Internally-headed relative clauses
15.2.1.1 Properties of internally-headed relative clauses
15.2.1.2 Some diagnostic tests for IHRCs
15.2.1.3 Theoretical accounts of IHRCs
15.2.2 Externally-headed relative clauses
15.2.2.1 Properties of externally-headed relative clauses
15.2.2.2 Some diagnostic tests for EHRCs
15.2.2.3 Theoretical accounts of EHRCs
15.2.3 Free relatives
15.2.3.1 Properties of free relatives
15.2.3.2 Some diagnostic tests for FRs
15.2.3.3 Theoretical accounts of FRs
15.2.4 Correlative clauses
15.2.4.1 Properties of correlative clauses
15.2.4.2 Some diagnostic tests for correlatives
15.2.4.3 Theoretical accounts of correlatives
15.3 Semantic typologies of relativization
15.3.1 Restrictive relative clauses
15.3.1.1 Properties of restrictive relative clauses
15.3.1.2 Some diagnostic tests for RRCs
15.3.1.3 Theoretical accounts of RRCs
15.3.2 Non-restrictive (or appositive) relative clauses
15.3.2.1 Properties of non-restrictive relative clauses
15.3.2.2 Some diagnostic tests for NRRCs
15.3.2.3 Theoretical accounts of NRRCs
15.4 Topics and relative clauses
15.5 Conclusions
Notes
References
16 Role shift: Theoretical perspectives
16.1 Introduction
16.2 Role shift and sentential complementation
16.3 Attitude and action role shift
16.4 Non-manual marking and point-of-view operators
16.5 Context-shifting operators and indexicals
16.6 Gestural demonstrations
16.7 Multiple perspectives
16.8 Conclusion: role shift and modality
Notes
References
17 Use of sign space: Experimental perspectives
17.1 Introduction
17.2 Overview of the use of space and associated sign types
17.2.1 Abstract use of space
17.2.2 Topographic use of space
17.2.3 Overlap between abstract and topographic use of space
17.2.4 Analysis of signs that use space meaningfully
17.3 Research questions and debates arising from the use of space
17.4 Linguistic processing of referent-location associations
17.4.1 Co-reference processing
17.4.2 Processing of topographic vs. abstract use of space
17.4.2.1 Behavioral evidence
17.4.2.2 Neuroimaging evidence
17.4.3 Morphemic vs. analogue processing of location
17.5 Use of space
17.5.1 Locative expression
17.5.2 Signing perspective and viewpoint
17.6 The acquisition of spatial language in sign languages
17.7 Spatial language and spatial cognition
17.8 Conclusion
Notes
References
18 Specificity and definiteness: Theoretical perspectives
18.1 Introduction
18.2 Manual and non-manual marking
18.2.1 Lexical determiners and non-manual marking
18.2.2 Order of signs within the noun phrase
18.2.3 Modulations in signing space
18.3 Types of definiteness and specificity
18.3.1 Definiteness: familiarity and uniqueness
18.3.2 Specificity: scope, epistemicity, and partitivity
18.4 Discussion and concluding remarks
Notes
References
19 Quantification: Theoretical perspectives
19.1 Introduction
19.2 Lexical quantifiers
19.2.1 D-quantification
19.2.2 A-quantification
19.3 Quantificational morphology
19.4 Structural aspects of quantification
19.4.1 Tripartite structures of quantification
19.4.2 Scopal interactions
19.4.3 Quantifiers and space
19.5 Conclusions
Acknowledgments
Notes
References
20 Implicatures: Theoretical and experimental perspectives
20.1 Formal pragmatics and the theory of implicature
20.2 Experimental investigations of implicatures
20.3 Scalar implicatures in the sign modality
20.4 Scalar implicatures based on conjunction/disjunction in ASL
20.5 Acquisition of scalar implicatures: theory
20.6 Scalar implicature and age of first language acquisition: experiment
20.7 Other implicatures based on modality
20.8 Conclusions
References
21 Discourse anaphora: Theoretical perspectives
21.1 Setting the stage
21.2 The same system
21.2.1 Syntax
21.2.2 Semantics
21.2.3 Summary: pronouns in sign language and spoken language
21.3 How is space encoded?
21.3.1 Variables or features?
21.3.2 Spatial syncretisms
21.3.3 Pictorial loci
21.4 Dynamic semantics
21.4.1 Background on dynamic semantics
21.4.2 E-type theories of cross-sentential anaphora
21.4.3 Sign language contributions
21.4.4 Dynamic semantics of plurals
21.5 Conclusion
Notes
References
22 Discourse particles: Theoretical perspectives
22.1 Introduction
22.2 Discourse regulation
22.3 Coherence
22.4 Modal meaning
22.5 Conclusion
Notes
References
23 Logical visibility and iconicity in sign language semantics: Theoretical perspectives
23.1 Introduction
23.2 Logical Visibility I: visible variables
23.2.1 Variable Visibility
23.2.2 Loci as variables
23.2.3 Individual, time and world variables
23.2.4 Variables or features – or both?
23.3 Logical Visibility II: beyond variables
23.3.1 Role shift as visible context shift
23.3.1.1 Basic data
23.3.1.2 Typology: ‘Mixing of Perspectives’ vs. ‘Shift Together’
23.3.1.3 Further complexities
23.3.2 Aspect: visible event decomposition
23.4 Iconicity I: iconic variables
23.4.1 Introduction
23.4.2 Embedded loci: plurals
23.4.3 High and low loci
23.5 Iconicity II: beyond variables
23.5.1 Classifier constructions
23.5.2 Event visibility or event iconicity?
23.5.3 Iconic effects in role shift
23.6 Theoretical directions
23.6.1 Plural pronouns
23.6.2 High loci
23.6.3 Role shift
23.6.4 Telicity
Acknowledgments
Notes
References
24 Non-manual markers: Theoretical and experimental perspectives
24.1 Introduction
24.1.1 Overview of argumentation and testing claims
24.1.2 Overview of the chapter
24.2 Historical development of the treatment of NMMs
24.2.1 Background on NMM analysis
24.3 The interaction of syntax, semantics, and prosody
24.4 Analyses of NMMs that present challenges to prosodic analyses
24.4.1 Syntactic approaches
24.4.2 Semantics
24.4.2.1 An explanation for the alternative spreading domain for brow raise in ASL
24.4.2.2 Information structure (focus) and marking syntactic derivations with prosody
24.4.2.3 A closer look at the full variety of head positions and movements
24.5 Evaluation
24.6 Experimental perspectives
24.6.1 Acquisition of NMMs
24.6.1.1 Earliest use of signs and face
24.6.1.2 Grammaticalized NMMs for syntactic purposes
24.6.2 NMMs and sign production
24.6.2.1 Trying to speak and sign at the same time
24.6.2.2 Signing rate effects on NMMs
24.6.2.3 Motion capture of NMM
24.6.3 Perception of NMMs
24.6.3.1 Eye-tracking of NMMs
24.6.3.2 Neural processing of NMMs
24.7 Summary and conclusion
Notes
References
25 Gesture and sign: Theoretical and experimental perspectives
25.1 Introduction
25.2 The visual modality in spoken language
25.2.1 Forms and functions of gestures in language
25.2.2 Role of gesture in language processing
25.2.2.1 Production
25.2.2.2 Comprehension
25.2.3 Conclusions: gesture
25.3 Sign language and language modality
25.3.1 Modality-independent and modality-dependent aspects of sign languages
25.3.1.1 Phonology
25.3.1.2 Morphology and syntax
25.3.2 Iconic and gestural elements in sign language
25.3.2.1 Iconicity
25.3.2.2 Representation of motion events in sign and gesture
25.4 Sign language, gesture, and the brain
25.4.1 Brain activation during language processing
25.4.2 Atypical signers
25.4.3 Gesture and sign processing contrasted: brain studies
25.5 Conclusions: sign language
Notes
References
26 Information structure: Theoretical perspectives
26.1 Introduction
26.2 Information structure: description and formalization
26.2.1 Strategies for topic marking
26.2.2 Strategies for focus marking
26.2.3 Information structure and the left periphery
26.3 Information structure in the visual-gestural modality: new directions
26.3.1 Focus and prominence
26.3.2 Contrast
26.3.3 Question-answer pairs
26.3.4 Doubling
26.3.5 Buoys and related strategies
26.4 Experimental research
26.5 Conclusions
Notes
References
27 Bimodal bilingual grammars: Theoretical and experimental perspectives
27.1 Introduction
27.2 Definitions: bilinguals and bimodal bilinguals
27.3 Development
27.3.1 Separation
27.3.2 Cross-linguistic influence: the BiBiBi project
27.4 Simultaneity and blending
27.4.1 Cross-language activation: experiments
27.4.2 Code-blending
27.4.2.1 Classifications
27.4.2.2 Correlations
27.4.2.3 When does blending occur: the Language Synthesis model and beyond
27.5 Conclusion
Notes
References
28 Language emergence: Theoretical and empirical perspectives
28.1 Introduction
28.2 Theoretical accounts
28.2.1 Structure from the mind/biology
28.2.2 Structure from cultural processes
28.2.3 Structure from acquisition processes
28.3 Experimental evidence
28.3.1 Child learners and adult learners are different: evidence from artificial language learning in the laboratory
28.3.2 From a pidgin to a creole through language acquisition processes
28.3.3 When the output surpasses the input: evidence for child learning mechanisms
28.3.4 Language creation by child isolates: the case of homesign
28.3.5 Structure from human cognition: gestural language creation in the laboratory
28.3.6 Intergenerational transmission introduces structure
28.3.7 Emerging sign languages: NSL and ABSL
28.3.7.1 Word order in ABSL
28.3.7.2 Word order in NSL
28.3.7.3 Word order in gestural language creation
28.3.7.4 Discussion: is one word order the default?
28.3.8 Spatial agreement/morphology
28.3.8.1 Spatial agreement/morphology in ABSL
28.3.8.2 Spatial agreement/morphology in NSL
28.3.8.3 Discussion
28.3.9 Summary of ABSL and NSL and future directions
28.4 Conclusion
Acknowledgments
Notes
References
29 Working memory in signers: Experimental perspectives
29.1 Introduction
29.2 The architecture of phonological STM for a visuospatial language
29.3 Modality effects in phonological STM
29.3.1 Evidence from serial recall tasks
29.3.2 The role of recall direction
29.3.3 Different stages of STM processing: encoding, rehearsal, and recall
29.3.4 The role of serial maintenance
29.4 Evidence from other linguistic and symbolic WM measures
29.5 Modality effects in visuospatial WM?
29.6 Beyond modality-specific storage and recall
29.7 So where does this leave the experimental study of WM in signers?
29.7.1 Participant considerations
29.7.2 Task considerations
29.7.3 The role of WM in sign language processing
29.8 Conclusion
Acknowledgments
References
Index


“Leading experts bring together experimental and theoretical approaches to understanding the many unique properties of sign languages. The breadth of coverage is impressive, from phonology to discourse, including bimodal bilingualism, gesture, working memory, and language emergence. This Handbook represents the current state of the art and should be required reading for anyone in the field of sign language research.”
Karen Emmorey, San Diego State University

“This book provides a collection of chapters summarizing major research areas in sign language linguistics, ranging from studies of classifier constructions to experiments on working memory in signers. Each chapter is written by a leading researcher in that area, addressing both the current state of the field and the types of studies that have been undertaken. As such, it is a highly useful book for anyone working in sign languages who needs a quick overview of the most current topics and the key findings in sign language typology and experimental studies of signers, both deaf and hearing. Though the chapters principally address sign languages, there is a great deal of new information on gesture, signing in hearing bilinguals who acquire both a spoken and signed language, and imaging studies of language processing.”
Carol Padden, University of California


THE ROUTLEDGE HANDBOOK OF THEORETICAL AND EXPERIMENTAL SIGN LANGUAGE RESEARCH

The Routledge Handbook of Theoretical and Experimental Sign Language Research bridges the divide between theoretical and experimental approaches to provide an up-to-date survey of key topics in sign language research. With 29 chapters written by leading and emerging scholars from around the world, this Handbook covers the following key areas:

• On the theoretical side, all crucial aspects of sign language grammar studied within formal frameworks such as Generative Grammar;
• On the experimental side, theoretical accounts are supplemented by experimental evidence gained in psycho- and neurolinguistic studies;
• On the descriptive side, the main phenomena addressed in the reviewed scholarship are summarized in a way that is accessible to readers without previous knowledge of sign languages.

Each chapter features an introduction, an overview of existing research, and a critical assessment of hypotheses and findings. The Routledge Handbook of Theoretical and Experimental Sign Language Research is key reading for all advanced students and researchers working at the intersection of sign language research, linguistics, psycholinguistics, and neurolinguistics.

Josep Quer is a research professor of the Catalan Institution for Research and Advanced Studies (ICREA) at Pompeu Fabra University and leads the Laboratory of Catalan Sign Language (LSCLab). He was co-editor of the international journal Sign Language & Linguistics.

Roland Pfau is an associate professor of Sign Language Linguistics in the Department of General Linguistics at the University of Amsterdam and is co-editor of the international journal Sign Language & Linguistics.

Annika Herrmann is a professor of Sign Languages and Sign Language Interpreting and head of the Institute of German Sign Language and Communication of the Deaf (IDGS) at the University of Hamburg. She is co-editor of the book series Sign Languages and Deaf Communities.


Routledge Handbooks in Linguistics

Routledge Handbooks in Linguistics provide overviews of a whole subject area or sub-discipline in linguistics, and survey the state of the discipline including emerging and cutting edge areas. Edited by leading scholars, these volumes include contributions from key academics from around the world and are essential reading for both advanced undergraduate and postgraduate students.

The Routledge Handbook of Syntax
Edited by Andrew Carnie, Yosuke Sato and Daniel Siddiqi

The Routledge Handbook of Historical Linguistics
Edited by Claire Bowern and Bethwyn Evans

The Routledge Handbook of Language and Culture
Edited by Farzad Sharifian

The Routledge Handbook of Semantics
Edited by Nick Riemer

The Routledge Handbook of Morphology
Edited by Francis Katamba

The Routledge Handbook of Linguistics
Edited by Keith Allan

The Routledge Handbook of the English Writing System
Edited by Vivian Cook and Des Ryan

The Routledge Handbook of Language and Media
Edited by Daniel Perrin and Colleen Cotter

The Routledge Handbook of Phonological Theory
Edited by S. J. Hannahs and Anna Bosch

The Routledge Handbook of Linguistic Anthropology
Edited by Nancy Bonvillain

The Routledge Handbook of Theoretical and Experimental Sign Language Research
Edited by Josep Quer, Roland Pfau, and Annika Herrmann

Further titles in this series can be found online at www.routledge.com/series/RHIL


THE ROUTLEDGE HANDBOOK OF THEORETICAL AND EXPERIMENTAL SIGN LANGUAGE RESEARCH

Edited by Josep Quer, Roland Pfau, and Annika Herrmann


First published 2021 by Routledge
2 Park Square, Milton Park, Abingdon, Oxon OX14 4RN
and by Routledge
52 Vanderbilt Avenue, New York, NY 10017

Routledge is an imprint of the Taylor & Francis Group, an informa business

© 2021 selection and editorial matter, Josep Quer, Roland Pfau, and Annika Herrmann; individual chapters, the contributors

The right of Josep Quer, Roland Pfau and Annika Herrmann to be identified as the authors of the editorial material, and of the authors for their individual chapters, has been asserted in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.

Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library

Library of Congress Cataloging-in-Publication Data
Names: Quer, Josep, editor. | Pfau, Roland, editor. | Herrmann, Annika, editor.
Title: The Routledge handbook of theoretical and experimental sign language research / edited by Josep Quer, Roland Pfau and Annika Herrmann.
Description: Abingdon, Oxon; New York, NY: Routledge, 2021. | Series: Routledge handbooks in linguistics | Includes bibliographical references and index.
Identifiers: LCCN 2020034675 (print) | LCCN 2020034676 (ebook) | ISBN 9781138801998 (hardback) | ISBN 9781315754499 (ebook)
Subjects: LCSH: Sign language acquisition. | Sign language–Research.
Classification: LCC HV2474 .R688 2021 (print) | LCC HV2474 (ebook) | DDC 419.072–dc23
LC record available at https://lccn.loc.gov/2020034675
LC ebook record available at https://lccn.loc.gov/2020034676

ISBN: 978-1-138-80199-8 (hbk)
ISBN: 978-0-367-64099-6 (pbk)
ISBN: 978-1-315-75449-9 (ebk)

Typeset in Times New Roman by Newgen Publishing UK


CONTENTS

List of figures  x
List of tables  xvi
List of contributors  xvii
Preface  xx
Editors’ acknowledgments  xxii
List of notational conventions  xxiii
List of abbreviations of non-manual markers  xxv
List of sign language acronyms  xxvii

1 Sign language phonology: theoretical perspectives (Harry van der Hulst & Els van der Kooij)  1
2 Phonological comprehension: experimental perspectives (Uta Benner)  33
3 Lexical processing in comprehension and production: experimental perspectives (Eva Gutiérrez-Sigut & Cristina Baus)  45
4 Prosody: theoretical and experimental perspectives (Jordan Fenlon & Diane Brentari)  70
5 Verb agreement: theoretical perspectives (Josep Quer)  95
6 Verb agreement: experimental perspectives (Jana Hosemann)  122
7 Classifiers: theoretical perspectives (Gladys Tang, Jia Li, & Jia He)  139
8 Classifiers: experimental perspectives (Inge Zwitserlood)  174
9 Aspect: theoretical and experimental perspectives (Evie A. Malaia & Marina Milković)  194
10 Determiner phrases: theoretical perspectives (Natasha Abner)  213
11 Content interrogatives: theoretical and experimental perspectives (Meltem Kelepir)  232
12 Negation: theoretical and experimental perspectives (Kadir Gökgöz)  266
13 Null arguments and ellipsis: theoretical perspectives (Carlo Cecchetto)  295
14 Null arguments: experimental perspectives (Diane Lillo-Martin)  309
15 Relative clauses: theoretical perspectives (Chiara Branchini)  325
16 Role shift: theoretical perspectives (Markus Steinbach)  351
17 Use of sign space: experimental perspectives (Pamela Perniss)  378
18 Specificity and definiteness: theoretical perspectives (Gemma Barberà)  403
19 Quantification: theoretical perspectives (Vadim Kimmelman & Josep Quer)  423
20 Implicatures: theoretical and experimental perspectives (Kathryn Davidson)  440
21 Discourse anaphora: theoretical perspectives (Jeremy Kuhn)  458
22 Discourse particles: theoretical perspectives (Elisabeth Volk & Annika Herrmann)  480
23 Logical visibility and iconicity in sign language semantics: theoretical perspectives (Philippe Schlenker)  500
24 Non-manual markers: theoretical and experimental perspectives (Ronnie B. Wilbur)  530
25 Gesture and sign: theoretical and experimental perspectives (Bencie Woll & David Vinson)  566
26 Information structure: theoretical perspectives (Vadim Kimmelman & Roland Pfau)  591
27 Bimodal bilingual grammars: theoretical and experimental perspectives (Caterina Donati)  614
28 Language emergence: theoretical and empirical perspectives (Annemarie Kocab & Ann Senghas)  636
29 Working memory in signers: experimental perspectives (Marcel R. Giezen)  664

Index  685


FIGURES

1.1 Hand internal movements apply to one set of selected fingers per sign  7
1.2 Downward movement as change in setting values (high > low) in three major locations (Head, Chest and Space)  9
1.3 Downward movement as change in setting values (high > low) in three distinctive locations (as areas)  9
1.4 Example of a minimal pair: a one- vs. two-handed sign in NGT  12
1.5 Two symmetrical signs (the sign on the right has alternating movement) and two asymmetrical signs (the sign on the left has similar handshapes)  13
2.1 Perceptual map for native and non-native participants. The open circle indicates the position of non-signs on an action-sign axis  38
2.2 Perceptual map for native deaf and native hearing participants. The open circle indicates the position of non-signs on an action-sign axis  38
2.3 Perceptual map for non-native deaf and non-native hearing participants. The open circle indicates the position of non-signs on an action-sign axis  39
4.1 Four signs from ASL demonstrating the following constraints: (i) the Handshape Sequencing Constraint, (ii) the Monosyllabicity Constraint, and (iii) the Symmetry Constraint. All three constraints apply to CLEAR (a) and AGREE (d) while INFORM (b) and WIFE (c) adhere to constraints (i) and (ii)  73
4.2 Coalescence in a prosodic word in ISL  74
4.3 Example of non-dominant hand spreading behavior within an ISL phonological phrase (PP)  76
4.4 Interrogative facial expressions for questions seeking information (wh-questions; left) and confirmation (yes/no-questions; right)  78
5.1 Two agreeing forms of the verb GIVE-PRESENT in LSC  96
5.2 Agreeing form of the verb TAKE-CARE in LSC  97
5.3 Agreeing forms of the verbs BRING and GET-SURGERY-ON in LSC  97
5.4 Forms of the verbs THINK and PASS in LSC  98
5.5 Agreeing forms of the verbs CHOOSE and INVITE in LSC  99
5.6 Agreeing form of the verb PASS in LSC  99
5.7 Agreeing auxiliaries AUX and GIVE-AUX in LSC  101
5.8 Non-manual agreement markers in ASL  102
6.1 Pictures of the German Sign Language (DGS) verbs EXPLAIN ((a), orientation change only), GIVE ((b), movement change only) and CRITIQUE ((c), movement and orientation change). The verb 1EXPLAIN2 marks 1st person subject and 2nd person object (top) and vice versa (bottom). The verb 1CRITIQUE3a marks 1st person subject and 3rd person object (top) and vice versa (bottom); 3aGIVE3b marks two 3rd person referents  123
7.1 Motivated sign in NGT involving an instrument as argument  145
7.2 Adjunction of agreement nodes to functional heads after verb movement  148
7.3 The classifier predicate LEGGED.ENTITY-MOVE.UP in NGT  150
7.4 Analysis of classifier predicate LEGGED.ENTITY-MOVE.UP in NGT, based on Distributed Morphology  150
7.5 The ‘frozen sign’ ESCALATOR in NGT  151
7.6 Derivation of ESCALATOR in NGT  151
7.7 Analysis of transitive-causative (a) and intransitive-unaccusative (b) predicates in HKSL (adapted from Lau 2002: 135, 138)  153
7.8 Types of classifiers and their syntactic representations  154
7.9 HKSL two-handed classifier predicate ‘file+CLw/e // CLbodypart’ in (21), accompanied by non-manual marker  160
7.10 Syntactic structure based on three sub-events  163
8.1 Replications of stimuli in the production (a,b) and comprehension (c) studies  178
8.2 Replications of stimuli and some of the produced non-BSL handshapes  180
8.3 Illustrations for the comprehension experiments  182
8.4 Replications of stimuli used in the spatial comprehension task (images after Atkinson et al. 2005): a sentence with a BSL preposition (a), a sentence with a classifier construction (b), and a picture set from which participants were to select the matching spatial setting (c)  184
8.5 Replications of examples of the stimuli (images after Emmorey et al. 2013) eliciting static (a,c) and dynamic (b,d) spatial relations between a Ground object (table) and a variety of Figure objects (c,d), as well as labels for the various Figure objects  187
9.1 Full representational model of possible event structure according to Ramchand (2008)  196
9.2 HZJ aspectual verb pair: (a) BRISATI ipfv; (b) OBRISATI pfv  203
9.3 HZJ aspectual verb pair: (a) DAROVATI ipfv; (b) DARIVATI pfv  204
9.4 HZJ aspectual verb pair: (a) TRAŽITI; (b) NAĆI  204
9.5 Representation of atelic and telic verb signs in ASL  206
9.6 Transitional NM (discontinuous mouth gesture of abrupt exhaling, ‘pf’) with telic event structure  207
9.7 Positional NM (continuous mouth gesture, ‘ph’) with atelic event structure  207
10.1 The single, long movement of the verbal form SIT (left) as compared to the small, repeated movements of the nominal form CHAIR  214
11.1 Leftward Movement Analysis  235
11.2 Rightward Movement Analysis  235
11.3 Position of focus phrase vis-à-vis CP and TP in ASL  240
11.4 Structure for ASL involving movement of the int-sign to [Spec, FocP] headed by a null copula BE and the topicalization of the remnant IP  243
11.5 Structure for IndSL wh-question with stranded associate phrase PLACE  247
11.6 Structure for IndSL wh-question with associate phrase PLACE left-adjacent to G-WH  248
16.1 Hare mocking and imitating tortoise  355
16.2 Exhausted villagers asking shepherd boy  364
16.3 Left: Hare imitating tortoise. Right: Shepherd boy holding a crook and looking around  365
16.4 Left: Mouse falling down. Right: Lion catching mouse  369
16.5 Shepherd boy shouting for villagers. Left: HEY. Right: SCREAM  370
16.6 Left: Mouse climbing over the face of the lion. Right: Villagers running to and asking shepherd boy  371
17.1 DGS example of a simultaneous classifier construction used to encode the spatial relationship ‘cup on table’  389
17.2 TİD example of form encoding next-to relationship in the pictured spatial scene showing three plates next to each other. The form ( -hand for three entities) highlights the spatial relationship, while abstracting away from entity and location information in terms of iconic, semantically specific representation  389
17.3 Adapted image of example stimuli used by Emmorey et al. (1993) in the image generation task  395
17.4 Adapted image of example stimuli used by Emmorey et al. (1993) in the image transformation, or mental rotation, task  395
18.1 (In)definiteness non-manual marking in NGT and LSC  408
18.2 Non-specificity non-manual marking in LSC  408
18.3 (In)definiteness non-manual marking in LIS  410
18.4 Two R-loci articulated on the frontal plane  411
18.5 Partitive construction with a specific determiner in LSC  417
18.6 Partitive construction with non-specific determiner in LSC  417
19.1 Tripartite structures generalized  431
20.1 ASL coordination strategies: (a) COORD-L; (b) COORD-shift  442
20.2 A scheme of the experimental design for the study in Davidson (2014). The numbers (“4”) indicate how many of each trial type were seen by each participant; trials were counterbalanced using the Latin Square method such that each situation (colored cans, number of bears, etc.) only appeared in one trial type per participant. Pictures at bottom show a screenshot of what participants viewed during a single (quantifier) trial  447
20.3 Non-manual differences between conjunctive interpretations (a) and disjunctive interpretations (b) of the COORD-shift strategy  449
20.4 The age at time of testing, and age of first exposure to ASL, of the participants in Davidson & Mayberry (2015)  452
22.1 Functions of discourse particles in sign languages  481
22.2 Finger-wiggle used in (3)  484
22.3 Palm-up used in (3)  484
22.4 Palm-up as a backchannel signal  485
22.5 Contrastive and additive discourse markers in DGS  489
22.6 Facial expressions for uncertainty on the same target sentence (‘Tim has sold his car.’) including the softening German modal particle ‘wohl’ (Herrmann 2013: 98, 130). The context triggers similar non-manuals (brow raise (‘br’), mouth corners down (‘mc-d’), head-tilt forward (‘ht-f’), and slow head nods (‘hn’)) depending on the degree of uncertainty and a frequent use of palm-up  493
24.1 Division of syntactic tree into three layers  537
24.2 Three different productions of multiple wh-questions. Straight vertical line = ‘gap’, on/off power button symbol = ‘reset’ of NMM (Churng 2011: 39–40)  546
24.3 Movement trajectories for the head: (a) nod; (b) thrust; (c) pull  555
25.1 Adapted schematic overview of different models in relation to speech and gesture production  570
25.2 Minimal pairs in BSL (British Sign Language)  575
25.3 Movement contrast between the derivationally related BSL signs KEY and LOCKV  575
25.4 (a) Example of ‘syntactic space’: the referent ‘film’ is located in the upper right of signing space by means of an index, but this does not map onto any real-world location; (b) Example of ‘topographic space’ (spatialized grammar): in the predicate, the referents ‘book’ and ‘pen’ are replaced with classifiers (CL) for ‘flat object’ and ‘long thin object’, respectively, and these handshapes are located adjacent to each other and at the height in signing space of the sign ‘table’  577
25.5 Images of the brain depicting (group) fMRI activation. Regions activated by BSL perception in Deaf and hearing native signers (first and second columns, respectively) and by audiovisual speech perception in hearing non-signers (third column). For language in both modalities, and across all three groups, activation is greater in the left than the right hemisphere, and perisylvian regions are engaged  580
25.6 Highlighted regions are those recruited to a greater extent by BSL perception than TicTac (nonsense gestures) in Deaf native signers. We interpret these regions as being particularly involved in language processing. Activation up to 5 mm below the cortical surface is displayed. Crosshairs are positioned at Talairach coordinates: X = −58, Y = −48, Z = 31. This is the junction of the inferior parietal lobule and the superior temporal gyrus  582
25.7 Areas with activation greater for processing location and motion expressions with classifier constructions than for processing lexical signs  583
26.1 Fine structure of the left periphery  596
26.2 Asymmetric coordination structure for the NGT example in (12), resulting from focus movement in the second conjunct  600
26.3 Structure of a question-answer clause, according to Caponigro & Davidson  602
26.4 Emphatic focus doubling in the Libras example (16b): after adjunction to E-Foc, both copies of the verb LOSE are pronounced  604
26.5 Multidominance analysis of a list buoy in example (20) (Kimmelman 2017: 44). FrSet stands for Frame Setting, a topical constituent in the left periphery  606
27.1 The Language Synthesis model  629
28.1 Illustration of unrotated (b) and rotated representations (c) of a giving event. The event is represented in (a) by a frame from the video stimulus; the diagrams beneath the event represent the signer and the signing area as viewed from above, with a semi-circle representing the signing space in front of the signer. The movement of the verb is represented with an arrow, and the implied location of the man is marked with an X. This location is compared with the man’s location in the event to determine the rotation of the representation  654
29.1 Example of a serial recall sequence in the ASL letter span task used by Emmorey et al. (2017). Participants had to recall in order fingerspelled letter sequences of increasing length presented at a rate of one letter per second  667
29.2 Example of a serial recall sequence in the ASL sentence span task used by Emmorey et al. (2017). In this task, participants were presented with increasing sets of sentences. They had to: (1) decide for each sentence in a set whether the sentence was semantically plausible or implausible (using a foot response to avoid possible effects of articulatory suppression), and (2) remember the last sign of each sentence (marked by an arrow in the figure). At the end of the sequence, they had to recall in order the final signs of each sentence  672
29.3 Example of a serial recall sequence in the spatial span task used by Emmorey et al. (2017). In this task, participants were presented with sequences of increasing length of normal or mirrored stimulus letters presented in different spatial orientations. Participants had to: (1) decide for each letter whether it was normal or mirrored (using a foot response to avoid possible effects of articulatory suppression), and (2) remember the spatial orientation of the top of each letter. At the end of the sequence, they had to recall in order the spatial orientation of the letters by clicking one of eight boxes on a spatial grid  674
29.4 Schematic representation of different WM tasks along two dimensions: serial maintenance demands (horizontal axis) and linguistic demands (vertical axis). Symbol legend: diamonds = STM tasks, triangles = complex WM tasks. Color legend: black symbols = consistent modality effects reported, gray symbols = some studies report modality effects; unfilled symbols = generally no modality effects reported  678


TABLES

1.1 Finger Selection specifications (‘|’ indicates head-dependency relationship)  5
5.1 Classification of verbs according to their agreement possibilities  100
12.1 Predictions for the comprehension of non-manual negation based on hemispheric side of lesion  288
18.1 Derivational bases of indefinite markers in different sign languages  409
20.1 Mean rejection rates for each sentence type in English and in ASL from Davidson (2014), with standard deviations following in parentheses  448
20.2 Mean rejection rates for each sentence type in English and in ASL from Davidson (2013), with standard deviations following in parentheses  450
20.3 Mean rejection rates for each sentence type in ASL by each group of signers from Davidson & Mayberry (2015), with standard deviations following in parentheses  453
27.1 Classifications of code-blending types  627
27.2 Correlations observed between word order types, morphology, and prosody of the language strings in blending utterances within the corpus  628

CONTRIBUTORS

Abner, Natasha (University of Michigan, USA)
Barberà, Gemma (Pompeu Fabra University, Spain)
Baus, Cristina (University of Barcelona, Spain)
Benner, Uta (Hochschule Landshut, Germany)
Branchini, Chiara (Ca’ Foscari University of Venice, Italy)
Brentari, Diane (University of Chicago, USA)
Cecchetto, Carlo (University of Milan-Bicocca, Italy; CNRS/University of Paris 8, France)
Davidson, Kathryn (Harvard University, USA)
Donati, Caterina (Université de Paris, France)
Fenlon, Jordan (Independent researcher)
Giezen, Marcel R. (Basque Center on Cognition, Brain and Language, Spain)
Gökgöz, Kadir (Boğaziçi University, Turkey)
Gutiérrez-Sigut, Eva (University of Essex, United Kingdom)
He, Jia (Chinese University of Hong Kong, China)
Herrmann, Annika (Universität Hamburg, Germany)
Hosemann, Jana (University of Cologne, Germany)
Kelepir, Meltem (Boğaziçi University, Turkey)
Kimmelman, Vadim (University of Bergen, Norway)
Kocab, Annemarie (Harvard University, USA)
Kuhn, Jeremy (Institut Jean Nicod, Département d’études cognitives, ENS, EHESS, CNRS, PSL Research University, France)
Li, Jia (Chinese University of Hong Kong, China)
Lillo-Martin, Diane (University of Connecticut, USA)
Malaia, Evie A. (University of Alabama, USA)
Milković, Marina (University of Zagreb, Croatia)
Perniss, Pamela (University of Cologne, Germany)
Pfau, Roland (University of Amsterdam, the Netherlands)
Quer, Josep (ICREA/Pompeu Fabra University, Spain)
Schlenker, Philippe (Institut Jean-Nicod, CNRS, Paris, France; New York University, USA)
Senghas, Ann (Barnard College, USA)
Steinbach, Markus (Georg-August-Universität Göttingen, Germany)
Tang, Gladys (Chinese University of Hong Kong, China)
van der Hulst, Harry (University of Connecticut, USA)
van der Kooij, Els (Radboud University, the Netherlands)
Vinson, David (University College London, United Kingdom)
Volk, Elisabeth (Georg-August-Universität Göttingen, Germany)
Wilbur, Ronnie B. (Purdue University, USA)
Woll, Bencie (University College London, United Kingdom)
Zwitserlood, Inge (Radboud University, the Netherlands)


PREFACE

Over the past decades, sign language linguistics has evolved extremely quickly, and it is safe to say that it is now a firmly consolidated research area that has become a recognized branch of linguistics despite its youth. This expansion notwithstanding, the results of the field are often not well-known or not easily accessible beyond practicing sign linguists. From its inception, sign linguistics encompasses a rather diverse group of practitioners in terms of scientific background: analytical frameworks and research traditions of the available scholarship are quite divergent, as happens in the study of spoken languages as well, and this diversity oftentimes hinders access to linguistic evidence from sign language linguistics. Descriptive and applied studies objectively outnumber theoretical ones, which also hampers the exchange of findings with the broader scientific community, including formal and experimental linguistics. Furthermore, a part of the relevant publications appears in field-specific journals and publications which are beyond the usual scope of non-sign researchers.

Sign language linguistics is an excellent research field to test linguistic theories, language universals, and the properties of the language faculty across modalities. In addition, experimental investigations, which necessarily need to be combined with theoretical studies, build an important testing ground for these theoretical analyses and offer a new perspective on language production, perception, and processing. However, given the usual encapsulation of (sub)fields in academic practice, it is often the case that the potentially relevant results from each of the areas of research are not known to the researcher in the adjacent field. For all these reasons, a comprehensive collection of state-of-the-art surveys of the most relevant topics in the field of sign language research by leading researchers from around the world was in place.

The goal of this handbook on theoretical and experimental issues in sign language research is to help overcome such obstacles and to offer a bridge among areas of expertise by providing an up-to-date reference, not only for the field of sign language linguistics but, crucially, also beyond it. This handbook compiles the findings from the formal analysis of sign languages and from experimental research in psycho- and neurolinguistics that take as point of departure current linguistic theorizing in the context of cognitive science. It has become clear that formal and experimental linguists need to cooperate in research, and in recent years, sign languages have become significantly more visible and valuable objects of study for both fields. A comprehensive, accessible overview of the questions addressed in current research and of existing analyses should contribute to enhancing fertilization across research fields and to making progress in the respective fields, also outside sign language research.

Rather than offering state-of-the-art chapters covering fairly broad research areas (such as Morphology, Psycholinguistics), the volume showcases the results of formal linguistic theorizing and experimental work in specific sub-areas, and to the extent possible, links the knowledge that has been gained on both sides. On the linguistic side, the focus is on specific topics that have been analyzed within formal theoretical frameworks (such as Generative Grammar), while on the experimental side, theoretical accounts are supplemented by experimental evidence gained in psycho- and neurolinguistic studies. Consider classifiers as an example. While there are a few studies that provide convenient descriptive overviews of classifier systems in sign languages, there was to date no publication that would provide a comprehensive overview of formal theoretical approaches to the phenomenon (e.g., classifiers as elements heading functional projections, relation between classifiers and argument structure, etc.). Similarly, there was no study that would summarize the experimental findings that provide evidence for (or against) the linguistic status of classifiers (e.g., course of acquisition, breakdown in aphasia).

Importantly, the chapters do not limit themselves to a review of the literature on a specific topic. The overview of the existing findings generally includes a critical assessment of the hypotheses and findings. Some topics are better studied than others, and accordingly, the length of the chapters varies. The volume intertwines chapters that take a theoretical and an experimental perspective, when existing research allows for that split. It is thus structured in four sections around the main grammar components (Phonology, Morphology, Syntax, Semantics & Pragmatics), complemented by an extra section on topics that cut across different grammar levels (Overarching topics). For some topics, there are simply no experimental studies yet that address the topic (e.g., noun phrase structure), or experimental studies are scarce, such that it did not make sense to devote an independent chapter to the experimental side (e.g., syllable).

Contributors have made a clear effort to guide the reader through the scholarship accumulated in the field on a particular topic, up to the most recent publications. Since the topics reviewed are obviously often interconnected, cross-referencing among chapters enhances their readability. From the point of view of presentation of language examples, conventions have been respected to a certain extent, but also adapted in some cases for the sake of uniformity. Still, the variants in notational conventions can be recovered in the corresponding list.

With this handbook we hope to have contributed a useful tool for theoretical and experimental linguists and researchers that focus on sign languages, but also more broadly for non-sign researchers in linguistics, psycho- and neurolinguistics who need and will profit from up-to-date and comprehensive surveys of the formal and experimental work that deals directly with languages in the visual-spatial modality.


EDITORS’ ACKNOWLEDGMENTS

We would like to thank all external reviewers for their careful reviewing of individual chapters as well as for their advice and comments on the handbook: Enoch Aboh, Diane Brentari, Patricia Cabredo-Hofherr, Brendan Costello, Karen Emmorey, Carlo Geraci, Matt Hall, Anke Holler, Cornelia Loos, Gary Morgan, Emar Maier, Edgar Onea, Marloes Oomen, Deborah Chen Pichler, Petra Schumacher, Felix Sze, and Malte Zimmermann. Moreover, we are very much indebted to Marloes Oomen from the University of Amsterdam and to Celine Soltau from the University of Hamburg for their helpful input. Especially, we would like to express our gratitude to Sarah Bauer from the University of Hamburg for helping us with administrative tasks and for her invaluable assistance in meticulously editing the chapters. In addition, we want to thank the Routledge editors Nadia Seemungal, Helen Tredget, Lizzie Cox, and Adam Woods for their support throughout the process.

The preparation of this volume has been supported by the SIGN-HUB project (2016–2020), funded by the European Union’s Horizon 2020 research and innovation program under grant agreement No. 693349. We would also like to thank the Spanish Ministry of Economy, Industry and Competitiveness and FEDER Funds (FFI2015-68594-P), as well as the Government of the Generalitat de Catalunya (2017 SGR 1478).


NOTATIONAL CONVENTIONS

EXAMPLE OF CONVENTION – FUNCTION / MEANING

SIGN – signs are represented in English small caps that are an approximation of the sign’s most general or common meaning
S-I-G-N – fingerspelling
SIGN++ / SIGN+++ / SIGN-rep – reduplication
2SIGN1 / 3aSIGN3b / aSIGNb / 1-SIGN-b / 1SIGN3 – verb agreement; the (subscript) numbers or letters refer to loci in the signing space
SIGNa / SIGNright / SIGNup – sign localization in signing space
SIGN^SIGN (e.g., WHO^SOME) – circumflex indicates compounding or cliticization
AUX1 / AUX2; NOT1 / NOT2 – a functional sign with more than one alternative form
IX1 / INDEX1 / PRO-1 / PRO.1 / IX-1 / 1PRONOUN – 1st person singular pronoun, ‘I’
IX1pl / IX-1pl – 1st person plural pronoun, ‘we’
IXa – pointing sign referring to location a
IX3pl-arc – 3rd person plural with arc movement
POSS1 / 1-POSS – 1st person possessive pronoun
SHOW-OFF / THERE-IS-NOT / DOESN’T-HAVE, SHOW_OFF / THERE_IS_NOT / DOESN’T_HAVE, SHOW.OFF / THERE.IS.NOT / DOESN’T.HAVE – one sign translated into English with more than one word
EXIST.NOT / HAVE.NOT – a suppletive form
GO.IMPFV – a sign internally modified to express a grammatical function
CAR-pl – plural marker suffixed to a noun
CL(6): ‘head_bowing’ / CL(fist): ‘head_bowing’ / CL(s): ‘head_bowing’ – classifier predicate with this handshape with this meaning (handshape indicated by special font, description, or letter from manual alphabet)
CLflower / CL(round) / CL1, CL.BIG-CAR / CL:FISH – classifier with a handshape expressing a particular meaning
C+MOVE – classifier with this handshape combined with this predicate
limp_around+CLbodypart-i – classifier with this handshape combined with this predicate; subscript ‘i’ indicates referent
CLb: move / CLwe: move – bodypart / whole entity classifier predicate with this meaning
CL-vehicle-DRIVE-BY – classifier with this meaning combined with this predicate
1HAND-OVER-CL:hold-long-thin-object(O)3 – classifier with this meaning combined with this predicate, agreeing with initial location 1 and final location 3
[…] – mouthing, the representation of the actual phonetic production
[pa] / ‘pa’ – mouth gesture
rs:3a / RS:a / rs-a – role-shift into the role of the referent localized at locus 3a, or a
pro / Ø – empty argument


ABBREVIATIONS OF NON-​MANUAL MARKERS

Non-manual markers are listed alphabetically, based on the abbreviation that appears above the line. In examples, the length of the line indicates the scope of a particular non-manual marker, i.e., its onset and offset. Note that some of the abbreviations are based on the function of the marker (e.g., ‘neg’ and ‘top’) while others are based on their form (e.g., ‘hs’ and ‘br’).

aff-hn – affirmative head nod
bf – brows furrowed
bht – backward head tilt
bl – body lean
bl-b – body lean backwards
br – brow raise
conditional / cond – conditional marker
contr – contrastive focus
eb – eye blink
eg – eye gaze
er – eyebrows raised
fe – furrowed eyebrows
focus / foc – focus marker
hf – head forward
hn – head nod
hp-3a – head position towards 3a
hs – headshake
ht-forward / ht-f – head tilt forward
ht-left / ht-l – head tilt to the left
ht-right – head tilt to the right
hth – head thrust
le – lowered eyebrows
lean right / left – body lean right/left
mc-d – mouth corners down
meaning – meaning of the non-manual marking
neg – syntactic negation marker(s)
q – yes/no-question marker
rc / rel – relative clause marker
re – raised eyebrows
rhet.q. / rh-q – markers of rhetorical question
rs – role shift
sb – slanted brows
squint / sq – squint
topic / top / t – topic marker
wh / whq – wh-question marker
wr – wrinkled nose
y/n – yes/no-question marker


SIGN LANGUAGE ACRONYMS

Some of the acronyms are based on the English name of a particular sign language (e.g., GSL for Greek Sign Language) while others are based on the name of the sign language in the respective spoken language (e.g., DGS for German Sign Language). For some of the sign languages, different acronyms can be found in the literature (e.g., DSL or DTS for Danish Sign Language / Dansk Tegnsprog), but an effort has been made to unify the acronyms across chapters.

ABSL    Al Sayyid Bedouin Sign Language
ASL    American Sign Language
Auslan    Australian Sign Language
BSL    British Sign Language
CSL    Chinese Sign Language
DGS    German Sign Language (Deutsche Gebärdensprache)
DSGS    Swiss-German Sign Language (Deutsch-Schweizerische Gebärdensprache)
DSL    Danish Sign Language
FinSL    Finnish Sign Language
GSL    Greek Sign Language
HKSL    Hong Kong Sign Language
HZJ    Croatian Sign Language (Hrvatski Znakovni Jezik)
IPSL    Indopakistani Sign Language
Irish SL    Irish Sign Language
ISL    Israeli Sign Language
LIS    Italian Sign Language (Lingua dei Segni Italiana)
LIU    Jordanian Sign Language (Lughat il-Ishaara il-Urdunia)
LSA    Argentine Sign Language (Lengua de Señas Argentina)
Libras    Brazilian Sign Language (Língua de Sinais Brasileira)
LSC    Catalan Sign Language (Llengua de signes catalana)
LSE    Spanish Sign Language (Lengua de Signos Española)
LSF    French Sign Language (Langue des Signes Française)
LSFB    French Belgian Sign Language (Langue des Signes de Belgique Francophone)
LSQ    Quebec Sign Language (Langue des Signes Québécoise)
NGT    Sign Language of the Netherlands (Nederlandse Gebarentaal)
NicSL    Nicaraguan Sign Language
JSL    Japanese Sign Language
NSL    Norwegian Sign Language
NZSL    New Zealand Sign Language
ÖGS    Austrian Sign Language (Österreichische Gebärdensprache)
PJM    Polish Sign Language (Polski Język Migowy)
RSL    Russian Sign Language
SSL    Swedish Sign Language
TİD    Turkish Sign Language (Türk İşaret Dili)
TJSL    Tianjin Sign Language
TSL    Taiwan Sign Language
VGT    Flemish Sign Language (Vlaamse Gebarentaal)
ZEI    Iranian Sign Language (Zaban Eshareh Irani)


1 SIGN LANGUAGE PHONOLOGY
Theoretical perspectives
Harry van der Hulst & Els van der Kooij

1.1  Introduction

In this chapter, we provide an overview of the 'phonology' of sign languages (admittedly biased toward our own work, but with ample discussion of, and references to, work by many other researchers). In Section 1.2, we will first focus on presenting constraints on sign structure. In doing so we use our own model for the overall structure of signs as a frame of reference, although our objective is not to argue for it or imply that this is the only defensible proposal.1 Then, in Section 1.5, turning to rules, we will argue that sign languages do not seem to display a type of phonological rule that is typical of spoken languages, namely rules that account for allomorphy. To explain this, we will distinguish between what we call utterance phonology and grammatical phonology. While sign languages certainly display utterance phonology, as we will show (effects of automatic assimilations, co-articulation, and reduction processes), we will argue that we do not find what we will call grammatical phonological rules which account for allomorphic alternations, although we will consider possible objections to this claim. While the point of establishing phonological structure in signs rests on the claim that signs consist of meaningless parts (just as the consonants and vowels of spoken languages are meaningless parts), in Section 1.6, we discuss views that question the alleged 'meaningless' character of all phonological building blocks. Recognizing meaning-bearing units2 will provide a possible explanation for why sign languages seem to disallow allomorphic rules. We will then propose that the lack of grammatical phonological rules that regulate allomorphy in sign phonology is compensated for by rules of a different kind (which might be called phonological). Such rules account for systematic form-meaning relationships internal to alleged 'monomorphemic' signs.3 Section 1.7 presents some of our conclusions.

1.2  Basic units and constraints

Up until 1960, sign languages were not regarded as fully-fledged natural languages that possess morpho-syntactic structure and an independent level of phonological structure. Recognition of phonological compositionality, which was due to the groundbreaking work of William Stokoe (Stokoe 1960), suggested that sign languages display duality


of patterning which had long been identified as a pivotal property of spoken languages (Martinet 1955; Hockett 1960). Stokoe proposed a transcription system for signs that replaced holistic drawings and verbal descriptions by a finite number of graphic symbols for what he perceived as separate meaningless parts or ‘aspects’ of the sign: the handshape, the movement of the hand, and the location in front of or on the body.4 The ideas of Stokoe were further developed in the sense that other properties of signs were added to his list of major units. Also, the major units were subsequently decomposed into distinctive features. We will provide examples of such additional proposals in subsequent subsections. While it is possible to recognize and transcribe phonetic properties of signs, it simply does not follow automatically that these distinctions have a reality in the mind of the signer in terms of storage in memory and language processing. Groundbreaking work on these issues was done by a research group at the Salk Institute during the 1970s, resulting in the Klima and Bellugi volume (1979), a must-​read for sign language researchers. Performing recall experiments, they showed that percepts of signs in short-​term memory are compositional. Importantly, they also showed that errors were in the direction of formational similarity and not meaning. Studying ‘slips of the hands’, they argued for the likelihood of compositionality in the articulatory phase (see Gutiérrez & Baus, Chapter 3). Finally, they showed that American Sign Language (ASL) users can make judgments about what they considered well-​formed or ill-​formed for ASL, which supports formal compositionality in the lexicon. We will now turn to our review of phonotactic constraints in sign languages. It is well known that spoken languages can differ quite dramatically in the constraints that specify the inventory of segments and the ways in which these segments can be combined, despite the fact that almost all constraints follow universal ‘markedness’ principles which regulate symmetry in inventories, sonority sequencing in syllable structure, and assimilation in sequences, and the general fact that ‘more complex structures’ imply the presence of ‘less complex structures’. While the phonological form of sign languages is likewise subject to phonotactic constraints, there is much less evidence for cross-​linguistic differences in this respect, although, arguably, there are currently not enough data from typological cross-​linguistic studies to know to what extent different sign languages can differ. Putting aside for the moment the question whether sign languages have units that formally or functionally can be compared to such notions as vowel, consonant, or syllable (see Section 1.4 for discussion), we would find evidence for language-​specific (context-​ free5) constraints if sign languages differed in their inventories of handshape, movement, or locations. Context-​sensitive constraints (assuming that the context does not have to be linear) would capture restrictions on the manner in which these major units can be combined to make up signs. While constraints of such kinds certainly exist, it seems to be the case that sign languages only display minor differences in terms of their inventories of the major units (as was already shown in Klima & Bellugi’s (1979) comparison of ASL and Chinese Sign Language), as well as in the ways in which these units combine into signs. 
But, again, only a few systematic typological studies have been done.6 However, the sets of major units as found in sign languages do not seem to be random (just like vowel and consonant inventories are not random), and while this perhaps can be explained on the basis of phonetic principles of articulation and perception (as has also been argued for spoken languages), it would seem that an account in terms of smaller building blocks (features) provides insight into these inventories. Thus, even though perhaps most


constraints appear to generalize over sign languages as a whole (rather than differentiating between them), they nonetheless, like in spoken languages, provide a window on the compositional structure of the major sign units in terms of features. Given a feature analysis, we can, for example, make distinctions between simpler and more complex units, which allows for implicational statements. As in the case of segmental inventories of spoken languages, the occurrence of more complex units implies the occurrence of simpler and more frequently occurring units, which are often called ‘unmarked’. Given the right set of features, simpler (and thus less marked) units have a simpler featural structure (see Battison 1978; Sandler 1996b). (1)  Examples of unmarked handshapes, locations, and movements7 a.  Handshapes: B ( ), S ( ), 1 ( ) b. Locations: Neutral space, Chest c. Movements: Downward path, End contact (hand touching a location) Given that signers are able to judge whether or not a given handshape belongs to their language, we may conclude that they tap into featural co-​occurrence constraints that capture the systematic structure of the handshape inventory of their language. This ability would not be so easily accounted for if we assumed that signers simply memorize a list of ‘atomic’ handshapes, although people are certainly capable of memorizing lists and of identifying an item that is not in the list as ‘unfamiliar’. The argument from well-​ formedness gains strength as we consider larger inventories such as the inventory of all signs. Hence, in addition to studies that establish featural co-​occurrence constraints that define inventories of basic units such as handshapes, movements, and locations, we would also expect to find context-​sensitive constraints that limit how the major units can be combined within the total inventory of signs. The examples in (2) show that not all locations allow for all handshapes, movements, and orientations, and vice versa. Again, while such constraints have been observed, there is still not enough evidence to suggest that different sign languages differ systematically in this respect. (2) Examples of context-​sensitive constraints a. * Two-​handed symmetrical sign: HS (both hands): pinky+index extended ( ), MOV: rotation, LOC: cheek b. * Two-​handed asymmetrical sign: Weak hand: R-​handshape ( ), strong hand: B ( ), ORI: palm of hand, MOV: tapping (end contact) c. * One-​handed sign: HS: index+thumb extended ( ), MOV: flexing, ORI: back of hand, LOC: lower arm It would appear that some observed constraints have a natural phonetic basis in that the non-​occurring combinations either are virtually impossible to articulate or, while possible, seem phonetically ‘hard’, or even ‘painful’. However, this is not the case for the examples in (2). There is a family of general phonotactic constraints on the internal make-​up of, specifically, monomorphemic8 signs in terms of the major units of signs. These various constraints all have in common that they demand a single occurrence of each major unit per sign. We first paraphrase these constraints in an informal manner in (3a–​c) and then 3


show how a more precise formulation motivates a feature composition of the major sign units.

(3)  a. The one-handshape constraint (1H): Each sign has only one handshape (Mandel 1981)
     b. The one-location constraint (1L): Each sign has only one major location (Battison 1978)
     c. The one-movement constraint (1M): Each sign has only one movement (Sandler 2011)

In subsequent sections, we will discuss and reformulate these constraints and show how they can be accounted for in a phonological model of sign structure, raising the question whether the domain of these constraints is the word (i.e., the whole sign), the morpheme, or the syllable. We will now proceed with a technical account of each of the major units. In doing so, we incorporate the idea that the major units are not a list, but rather form a hierarchical structure. This idea, introduced in Sandler (1987), was inspired by models of ‘feature geometry’ for spoken language (Clements 1985).
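The logic of the constraints in (3) can be made concrete by treating a lexical sign as a bundle of specifications and the constraints as limits on how many values each major unit may carry. The following sketch (in Python, with invented attribute names and toy values; it illustrates the general idea only and is not part of any published model) checks a sign against informal versions of (3a–c), with (3c) already stated in the setting-based form that will be motivated in (12).

```python
# Illustrative only: attribute names and example values are invented for this sketch.
from dataclasses import dataclass
from typing import List

@dataclass
class Sign:
    selected_fingers: List[str]   # selected finger specifications, e.g. ["one"]
    major_locations: List[str]    # major locations (areas), e.g. ["cheek"]
    settings: List[str]           # setting values; two values encode one path movement

def check_constraints(sign: Sign) -> dict:
    """Check the sign against informal versions of the constraints in (3)."""
    return {
        "one-handshape (1H)": len(sign.selected_fingers) == 1,
        "one-location (1L)":  len(sign.major_locations) == 1,
        "one-movement (1M)":  len(sign.settings) <= 2,   # cf. the setting-based (12)
    }

# A well-formed monomorphemic sign: one selected-finger set, one major location,
# and a single downward path movement expressed as two setting values.
print(check_constraints(Sign(["one"], ["cheek"], ["high", "low"])))
# {'one-handshape (1H)': True, 'one-location (1L)': True, 'one-movement (1M)': True}
```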

1.2.1  Handshape

Handshape is not holistic (as in Stokoe's original model), but rather internally complex, acknowledging a selected finger unit and a so-called finger position unit. The selected finger unit refers to the fingers that are 'foregrounded' (selected), as opposed to the 'backgrounded' (non-selected) ones (see Mandel 1981: 81–84). Mostly, foregrounded fingers are the fingers that are in a specific configuration, while backgrounded fingers are folded.9 Both Finger Selection and Finger Configuration have a further finer substructure in terms of features that has been motivated in various studies (Sandler 1989; van der Hulst 1993, 1995; van der Kooij 2002); see the structure in (4).

(4)  Articulator10
       Finger Selection11
         FS1
           Thumb: [out]
           Side: [ulnar]
         FS0: [one], [all]
       Finger Configuration
         FC1
           Spread: [wide]
           Flexion: [curve]
         FC0 (Aperture): [open], [close]

Finger Selection is represented with various sets of features in different models. Some models have a feature for each finger (e.g., Sandler 1989; Corina & Sandler 1993; Uyechi 1994; Brentari 1998). The drawback of such proposals is that they overgenerate the set


of attested handshapes for all sign languages that have been studied to date. In a more restricted model that is in accordance with the principles of Dependency Phonology, van der Hulst (1993, 1995) and van der Kooij (2002) have proposed a system of unary phonological primes to characterize handshapes. The Finger Selection node in (4) dominates two features, [one] and [all], which can occur by themselves or in combination. If combined, a head-dependency relationship between the two features is established (indicated by '|' in Table 1.1). This allows four possible structures, which, in conjunction with the specification of the side of the hand [ulnar] gives us eight possible sets of selected fingers. In combination with Finger Configuration, these features can capture all contrastive handshapes.

Table 1.1  Finger Selection specifications ('|' indicates head-dependency relationship)

Finger Selection    default                         side: ulnar
one                 index                           pinky
one | all           index and middle                index and pinky
all | one           index and middle and ring       middle and ring and pinky
all                 all four

The phonetic interpretation of [one] is that one finger is selected. In principle, this could be any finger, but the default is the index finger. The default can be overridden by specifying the Side value [ulnar] ‘pinky side’. This table does not give us a representation for the middle and ring finger occurring together or separately. Signs in which these two fingers occur together are not attested, and the two fingers occurring separately are used interchangeably (except in counting) in Sign Language of the Netherlands (NGT). The issue of separate occurrence of the middle fingers, which are rare at best, will not be discussed here.10 The position of the thumb is often predictable.11 If Finger Configuration specifications for aperture are specified, its position is ‘opposed to the selected fingers’, as in ‘closed baby-​beak’ ( ). In the initial posture of an opening movement, the thumb often restrains the selected fingers. The feature [out] must be specified, however, when the thumb is the only selected digit or when it co-​occurs with other selected fingers, in which case configuration features apply to all selected fingers, including the thumb. Finger Configuration features specify the ‘position’ of the selected fingers. The central (head) unit of Finger Configuration is taken to be Aperture as it is found to be the most frequent configuration of selected fingers. The aperture features [open] and [close] refer to the opposition relation between the selected fingers and the thumb. Unlike the combination of features in the Finger Selection node, resulting in different sets of selected fingers, combinations of the two aperture features do not encode intermediate degrees of opening, but rather are interpreted dynamically, as opening (closed to open) and closing (open to closed) movements. So-​called hand-​internal movements are, like all other movements, represented as a sequence of features resulting from a branching node (such as aperture). We take the difference between the simultaneous interpretation of [one] and [all] vs. the sequential interpretation of [open] and [close] to be a manifestation of the difference between the head and dependent status of Finger Selection and Finger Configuration, respectively. For flexion of the fingers, we found a single feature [curve] to be sufficient. Handshapes that apparently have flexion of the base joint only are either analyzed as alloshapes of 5


a handshape without base joint flection (as, possibly, in the ASL sign GROW, in which the palm of the whole flat hand or the palm side of the fingers is level with the (horizontal) top of the indicated referent), or as manifestations of path movements (as in the NGT sign COME-HERE). This implies that 'handshape' is not a phonological unit but rather an abbreviation for the visual manifestation of the Finger Selection and Finger Configuration features. The default option of [curve] means that all joints are bent to indicate a round shape. So-called 'claw' shapes, in which only the non-base joints are flexed, are treated as tensed versions of curved shapes or resulting from contact with the Place. Finally, Width also has only one feature and is only used when the selected fingers need to be visually separated (abducted) so that they can be used as individual entities (in the signs WALK or FIVE) or when the selected fingers are spread apart to express the meaning of vastness (as in ROOM).
Our hypothesis is that the sets of selected fingers in combination with the configuration features in (4) can capture the contrastive handshapes in most or all sign languages (but see, for example, Nyst (2007)). This can only be established by comparing phonological analyses of various sign languages along the same lines. Some sign languages appear to have a more complex handshape inventory, mostly as a result of activating multiple finger selection or configuration features.12 However, meaning can often be associated with these complex handshapes (e.g., in borrowings from Chinese fingerspelling or character signs (Fischer & Gong 2010), or the male-female paradigm in several Asian sign languages such as Japanese Sign Language (Peng 1974) and Taiwan Sign Language (Tsay & Myers 2009)). These complex handshapes could in principle be accommodated by complex (i.e., branching) head nodes, a solution that may be licensed by morphologically complex structures. The question of which phonetic properties of all occurring handshapes (and other properties of signs for that matter) fall under the purview of phonological specification is addressed in Section 1.6 below, where we argue that some phonetic properties are direct iconic projections from the sign's meaning.
As the examples in (5) demonstrate, the one-handshape constraint must allow that certain changes in the handshape (hand-internal movements) are possible (such as opening, closing or bending), but these are restricted to movement of the joints of the selected fingers, as specified in the aperture node. The examples in (5) are from NGT and are illustrated in Figure 1.1.

(5)  Handshape changes
     a. Aperture change: closing – NICE: index finger selected
     b. Aperture change: opening – SPRING: all fingers selected
     c. Aperture change: closing + curve – WANT: all fingers selected


NICE          SPRING          WANT

Figure 1.1  Hand internal movements apply to one set of selected fingers per sign (© NGT Global SignBank: Crasborn et al. 2019)

A more precise formulation of the 1H-constraint is therefore:

(6)  The one-handshape constraint (revised): Each sign has only one selected finger specification (Mandel 1981)

We will later discuss whether this is a constraint on the phonological complexity of morphemes or on syllables. If the notion of syllable is invoked (see Section 1.4), it is often observed that morphemes tend to be monosyllabic.
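For readers who like to see the combinatorics of this section spelled out, the sketch below enumerates the selected-finger sets that follow from the two unary features [one] and [all], their two head-dependency combinations, and the Side specification [ulnar]. It simply restates Table 1.1 as reproduced above (in Python, with invented names); it is an illustration of the combinatorics, not an implementation of the authors' model.

```python
# The four Finger Selection structures of Table 1.1 ('|' marks head-dependency),
# each paired with its default (radial-side) and [ulnar] (pinky-side) interpretation.
FINGER_SELECTION = {
    "one":       {"default": "index",
                  "ulnar":   "pinky"},
    "one | all": {"default": "index and middle",
                  "ulnar":   "index and pinky"},
    "all | one": {"default": "index and middle and ring",
                  "ulnar":   "middle and ring and pinky"},
    # Counting from either side of the hand selects the same four fingers.
    "all":       {"default": "all four fingers",
                  "ulnar":   "all four fingers"},
}

def selected_fingers(structure: str, ulnar: bool = False) -> str:
    """Interpret a Finger Selection structure, optionally with Side: [ulnar]."""
    return FINGER_SELECTION[structure]["ulnar" if ulnar else "default"]

# Four structures x two Side values = the eight possible specifications mentioned above.
for structure in FINGER_SELECTION:
    for side in (False, True):
        label = structure + (" + [ulnar]" if side else "")
        print(f"{label:20s} -> {selected_fingers(structure, side)}")
```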

1.2.2  Orientation

Stokoe's notation introduced various symbols for specifying the orientation of the hand (mainly referring to the articulatory positions (rotation) of the lower arm). Later, researchers introduced orientation as a major sign unit that Stokoe found to be distinctive in some signs (the standard reference is Battison (1978); see van der Hulst (to appear) for details). Several sign phonologists took orientation to be the absolute direction (in space) of the palm and/or the fingers. Whereas this may be a possible distinction, it was observed that often, there is much variation in the so-called absolute orientation (e.g., 'palm down' or 'fingertips contralateral') in actual signing. Moreover, the description in


terms of absolute orientation cannot capture the similarity in underlying forms of various inflecting predicative signs (such as NGT VISIT). This has led to a proposal of describing orientation in relative terms, i.e., as the part of the hand that points in the direction of the end of the movement (the final setting) or toward the specified location (Crasborn & van der Kooij 1997).13 Relative orientation specifies a part of the hand regardless of its specific configuration and includes palm side, back of the hand side, radial (thumb) side, ulnar (pinky) side, tips of selected fingers, and wrist side of the hand.14
Sandler (1989) has proposed to group orientation under a node 'hand configuration', which corresponds to what we called the articulator in (4). In (7), we rename the original articulator node as 'handshape' and use the label 'articulator' for the higher node, which dominates handshape and orientation.

(7)  Articulator
       Orientation: {palm, back, radial, ulnar, tips, wrist}
       Handshape
         Finger Selection
         Finger Configuration

The evidence for this grouping is based on assimilation phenomena, discussed in Sandler (1987), in lexicalized compounds in ASL and Israeli Sign Language (ISL), which show that either the orientation spreads by itself, or combined with the entire handshape. Since orientation can spread by itself, van der Hulst (1993) took this to be the dependent node (on the assumption that dependent nodes have greater 'mobility' than head nodes). We will assume that the orientation node can dominate a branching feature specification (e.g., prone and supine). A double specification for Orientation then characterizes an orientation change, as in, e.g., ASL DIE, in which the lower arm rotates.

1.2.3  Location

Location (or place) is one of the major components of signs. The phonetic-perceptual requirement for objects to move to be optimally visible leads to the fact that hands move during most (perhaps all) signs,15 which seems to contradict the one-location constraint in (3b), according to which each sign has one major location. To understand what the one-location constraint tries to capture, the notion of location needs to be broken down into a major location unit and a so-called setting unit. The major locations proposed by Battison (1978) (such as body, hand, arm, head, neck, and neutral space) are the 'areas' within which the hands move. Based on the observation that movements of monomorphemic signs stay within these major locations, Sandler (1989) proposed setting features to specify more specific locations within the major place of articulation. Sets of setting features may represent the movements within major locations such as [high] and [low], as is illustrated by the NGT examples in (8); see Figure 1.2.

(8)  Changes in setting values
     a. Head high>low: SERIOUS
     b. Chest high>low: HUMAN
     c. Space high>low: HOME


SERIOUS          HUMAN          HOME

Figure 1.2  Downward movement as change in setting values (high > low) in three major locations (Head, Chest and Space) (© NGT Global SignBank: Crasborn et al. 2019)

If we look more closely at the set of movements in relation to the various places of articulation, the one-location constraint appears to hold not only for the major locations that Battison identified: the head, the torso, and the nondominant hand. Rather, there is a slightly larger set of major locations including cheek, ear, etc., if we define (Major) Location to be a distinctive area rather than a spot within which the hand typically moves. Examples from NGT are given in (9) and illustrated in Figure 1.3.

(9)  a. Cheek high>low: FATHER
     b. Ear high>low: DISOBEDIENT16
     c. Nose high>low: SKILLFUL

FATHER          DISOBEDIENT          SKILLFUL

Figure 1.3  Downward movement as change in setting values (high > low) in three distinctive locations (as areas) (© NGT Global SignBank: Crasborn et al. 2019)


We thus have to reformulate the one-​place constraint: (10) The one-​place constraint (revised): Each sign has only one (major) location specification As a result of being interpreted both as spots, areas, or a combination of the two, location inventories that have been proposed vary in size. In several models that describe ASL, we find an abundance of location features (e.g., Liddell & Johnson 1989). This is due to the fact that every phonetic location that is touched or referred to by the articulator in any sign is deemed evidence for a location feature specification in these models. In our view, a set of location features is only achieved through analysis of the contrastive status of the location (as a distinctive property of the sign) and by unraveling phonetic predictability patterns and the role of iconicity (see van der Kooij 2002). While Sandler (1989) proposed a set of setting features to specify more specific sublocations within the major place of articulation (e.g., [high], [low], [contact]), it would seem that a single setting specification must be used to indicate a specific point within the distinctive location, if this sublocation is not identifiable as a default sublocation. Examples of such cases occur when a specific meaning associated to that point or ‘landmark’ (eye, ear, temple, heart, etc.) is expressed. However, we will argue below that in such cases, meaning association in specific settings within the distinctive location calls for a morphological analysis in combination with phonetic pre-​specification of the specific setting. By interpreting location as a distinctive area in the lexical specification of a sign, the variation space of that sign is accounted for either with reference to the notion default setting or to an iconically motivated choice that relates to the meaning of the sign (see Section 1.6). In the latter case, we propose that the specific meaning-​bearing setting value is pre-​specified in the lexicon. When two setting features are specified, this implies a path movement, a movement of the hand. By pairing up two setting values (high-​low, contra-​ipsi, proximal-​distal, etc.), a path movement within the distinctive locations (as areas) can be formally described without postulating a movement unit as such. For example, a branching setting may be ‘high-​low’, as illustrated in (11), referring to a sign that follows a downward path movement. (11)

The formal specification of setting as a dependent of the head (location) accounts for the interpretation of the size of movement as relative to the size of the location. For instance, a downward path on the cheek may be smaller than a downward path in neutral space. Given the distinction between major place and settings, the observed limitation on path movement could be stated as a restriction on setting specifications, as in (12):


(12) The one-​(path)movement constraint: Each monomorphemic sign has only two setting specifications. The one-​movement constraint has not been formulated explicitly in the literature (but see Sandler (1999) for the idea). Meanwhile, it has certainly been noted that signs seem to require a movement in order to be perceived (cf. Sandler 1996a). As just noted, the reduction of the movement constraint in (3c) to the setting constraint in (12) raises the question (or perhaps suspicion) as to whether path movement is an independent major unit or can be derived from settings. The main reason for granting movement the status of a formal unit is that path movements can have different properties involving the trajectory of the path, as we will see in the next section.17

1.2.4  Movement types As discussed, a movement stemming from two setting specifications is called a path movement.18 Other ‘smaller’ changes result from double specifications for orientation or aperture; these are local movements (13b). (13)  Types of movement: a.  Path movement b.  Local movement i.   Aperture change (hand-​internal movement) ii.  Orientation change In lexical signs, local movements and path movements can occur independently, or combined, in which case both are executed simultaneously, meaning that their beginnings and ends occur at the same time.19 Many signs consist of one simple path (interpolations between two settings, high-​low being the most frequent). But path movement trajectories can take different shapes; they can be straight, circular, curved, or zigzag. The circular and some of the curved movements are best analyzed in terms of movement features, i.e., [circle], [curve], which means that we need a unit in the sign structure where such features can be specified. Below, in (21) we will make a proposal with respect to where in the sign structure movement features are placed, which involves the introduction of a manner node.20 Path movements that are not simple interpolations between two settings can furthermore display what is sometimes called a secondary movement. For example, a path movement during which the hand makes small rotations or closing movements. These can be analyzed as repeated local movements (i.e., repeated aperture change and orientation changes) that can co-​occur with a path movement, although they can also occur by themselves when there is no path movement and the hand is held in one position (Perlmutter 1992). In this case, the manner node will be specified with a feature such as [repeat]. Verbal descriptions of these repeated secondary movements include ‘nodding’, ‘hooking’, ‘twisting’, etc. (see Liddell & Johnson (1989) and van der Hulst (1993: § 2.2.2) for more discussion).21


1.2.5  Two-handed signs

Given that we refer to the hand as the articulator, it is remarkable that sign languages have access to two manual articulators. While one might expect that the choice of one or the other hand could be contrastive, there is no evidence for this.22 Signers will typically use their preferred hand to sign, but can easily switch to the other hand when this is necessary or convenient, without any change in lexical meaning. However, many signs are specified in the lexicon as being made with two hands; the proportion of one- and two-handed signs can vary from sign language to sign language, but can be as high as 50% (Battison 1978). We note that the choice between using one hand or two is only 'weakly' contrastive; an example from NGT is given in Figure 1.4.

CHOOSE          SOCK

Figure 1.4  Example of a minimal pair: a one-​vs. two-​handed sign in NGT (© NGT Global SignBank: Crasborn et al. 2019)

In the class of two-​handed signs, we find that the possibility for the two hands being different or doing different things is severely constrained. Two general co-​occurrence constraints have been proposed for two-​handed signs (Battison 1978; Napoli & Wu 2003; Eccarius & Brentari 2007; Morgan & Mayberry 2012): (14)

a. The symmetry condition: In two-​handed signs in which both hands act as active articulators, both hands have the same handshape, the same (or opposite) orientation, the same movement (synchronic or asynchronic, called alternating), and the same location. Two NGT examples are provided in Figure 1.5a. b. The dominance condition: In two-​handed signs in which one hand (the passive or weak hand) is the location for the other (active or strong) hand, the location hand either has the same handshape as the active hand or is selected from a small set of ‘unmarked’ handshapes. Two NGT examples are provided in Figure 1.5b.


JUDGEVERB          SAME          GREEN          POLITICS

Figure 1.5  Two symmetrical signs (the sign on the right has alternating movement) and two asymmetrical signs (the sign on the left has similar handshapes) (© NGT Global SignBank: Crasborn et al. 2019)

Two-​handed signs thus come in two broad types. For signs in which both hands act as active articulators, called symmetrical, one might simply adopt the feature ‘[2-​handed]’ and, when relevant, [alternating], while signs in which the hand acts as a location, called asymmetrical, require [hand] to be one of the location features (cf. Sandler 1989). The specific relation between the two hands may also be a relevant property (van der Kooij 2002). The relation between the hands is captured by the orientation features of the strong hand. The orientation is interpreted with respect to the weak hand as the location in asymmetrical signs and with respect to same part of the hand as indicated by the orientation feature in symmetrical signs. For some two-​handed signs, specific hand interaction features may be required such as [crossed], [below] or [above], in relation to the other hand, and so on. If such specifications are required, we must identify a separate node for the weak hand (albeit for different reasons than those given in van der Hulst (1996)23) and the spatial arrangement of the two hands appears to be always motivated.
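The symmetry and dominance conditions in (14) lend themselves to a compact restatement as checks over a two-handed sign's specification. The sketch below is only an informal illustration of that restatement (Python, with invented field names; the 'unmarked' set merely echoes the examples in (1a), the full set being larger), not an implementation of any of the cited proposals.

```python
# Invented field names; the unmarked handshapes here just echo the examples in (1a).
UNMARKED_HANDSHAPES = {"B", "S", "1"}

def satisfies_symmetry(strong: dict, weak: dict) -> bool:
    """Symmetry condition (14a): both active hands share handshape, location and
    movement (possibly alternating); orientation may be the same or opposite."""
    return (strong["handshape"] == weak["handshape"]
            and strong["location"] == weak["location"]
            and strong["movement"] == weak["movement"])

def satisfies_dominance(strong: dict, weak: dict) -> bool:
    """Dominance condition (14b): the weak hand serves as a static location and
    either copies the strong hand's handshape or comes from the unmarked set."""
    return (weak.get("movement") is None
            and (weak["handshape"] == strong["handshape"]
                 or weak["handshape"] in UNMARKED_HANDSHAPES))

# An asymmetrical sign: the strong hand moves on a static weak hand whose
# handshape is drawn from the unmarked set.
strong = {"handshape": "1", "location": "weak hand", "movement": "path"}
weak = {"handshape": "B", "movement": None}
print(satisfies_dominance(strong, weak))  # True
```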

1.3  Signs as single segments

Stokoe's suggestion was that the major units of signs are comparable to phonemes, which led him to say that a big difference between spoken and sign languages is that in the former, phonemes occur sequentially, while sign phonemes ('cheremes' in his terminology) occur simultaneously. However, van der Hulst (1993) points out that the major


units compare more closely to what in spoken phonology are called class nodes (which are subunits of ‘phonemes’), such as Manner, Place, or Laryngeal (see Clements 1985). If this is the right analogy, this implies that monomorphemic signs are ‘monosegmental’, i.e., contain the equivalent of a single phoneme in spoken language (see Channon (2002) for a similar view). In this section, departing from a general characterization of articulation that generalizes over both spoken and sign language, we will argue that signs are indeed phonologically analogous to single segments (‘phonemes’) in spoken languages. From a production point of view, we can say that the articulation of a single segment in both spoken and sign language involves an action by an actor at some location (15). (15)

ACTOR: Active articulator
ACTION: Manner
LOCATION: Place of articulation

In spoken language, there is a close correlation between articulator choice and choice of location, which results in the fact that models for spoken segments adopt only one node that is called either ‘articulator node’ or ‘place node’.24 As a result, phonologists have come to use ‘articulator’ and ‘place’ terminology almost interchangeably, referring to a /​k/​as either ‘velar’ (= place) or ‘dorsal’ (= articulator) without discrimination. The point is that having both an articulator and a place/​location specification, while required in the actual production of speech, introduces a significant amount of redundancy, due to the fact that many theoretical combinations are either anatomically impossible or so hard that they do not occur, such as a coronal pharyngeal (impossible), or a labial alveolar (difficult). In practice, then, phonological models of speech segments reduce the structure in (15) to that in (16).25 (16) Manner

Place/Articulator

For sign language, however, we need the full structure in (15), because the articulator can have many different shapes and moreover moves relatively easily with respect to all places.26 Indeed, the distinction between actor and location can be found in all sign models. Returning to the notion of relative orientation, we can now identify this specification as providing a subdivision of the hand into at least six sub-​articulators, being the six sides of the hand that may account for minimal pairs in various sign languages (17).27 (17)

Sub-articulators of the hand:
•  Palm
•  Back
•  Tip
•  Wrist side
•  Ulnar side
•  Radial side


Adopting the structure in (15) as the structure of signs, we have provided a natural place for features that specify the shape of non-​interpolation movement. Since ‘manner’ specifies the relation between the articulator and the place, this node could just as well be labeled ‘movement’, but we will maintain the label ‘Manner’; see (21). In conclusion, from the analogy that has been developed in this section, the structure of (monomorphemic) signs as a whole is that of a single segment (or single ‘phoneme’). This means that the various constraints that we have discussed (see (3a–​c), (6), (10), (12)) are constraints on segments. These constraints then, indirectly, bear on morphemes if we adopt a morpheme structure constraint which says that morphemes consist of a single segment; we will turn to the notion of ‘syllable’ in the next section. While Stokoe’s original claim that sign language differs from spoken language in lacking sequential structure was based on the simultaneity of the units of handshape, movement, and location, this would now appear to be a non-​starter, since the corresponding class nodes in spoken segments are also simultaneous. The real striking difference between the two modalities is rather that ‘monomorphemic’ signs (i.e., morphemes) lack sequential segmental structure in being monosegmental, as opposed to most morphemes in spoken languages, which consist of multiple segments. This conclusion stands in apparent contrast to proposals that we will discuss in the next section which argue for a sequential skeletal structure for signs. In our own proposal in (21), we will incorporate a ‘skeletal structure’, albeit of a reduced form. The question is now how it is possible that there are no multisegmental monomorphemic signs. Surely, such a state of affairs is unheard of for any spoken language.28 The obvious answer is that there are many more possible sign segments than there are possible spoken language phonemes, which results from the four facts listed in (18), the first two of which were already mentioned (cf. Meier (2000) for discussion of a similar point): (18)

a. In signs, as shown, articulator and place are independent parts that can cross-classify fairly freely.
b. The hand can take many different shapes.
c. In signs, the articulator has six different sub-articulators.
d. In signs, the manner can take many different forms, due to the magnitude of the movement.

With respect to (18d), we suggest that, broadly speaking, the manner features relate to three aspects of the sign: path shape, temporal alignment, and spatial alignment. In (19), we propose a tentative set of manner features: (19)

A set of potential manner features
a.  Path shape:
    1.  [arc] (taking 'straight' to be the default)
    2.  [circle]
b.  Temporal alignment properties:
    3.  [repetition]
    4.  [alternation]
    5.  [bidirectional]/[reverse]
    6.  [contact]
c.  Spatial alignment properties; hand arrangement:
    7.  above
    8.  in front of
    9.  interlocked/inside/interwoven


We can conclude that the array of phonological distinctions for sign segments is many times larger than for speech segments, so much so that it is possible to represent perhaps all ‘necessary’ morphemes in terms of single segments. A question that relates to the issue discussed here is how many morphemes a (sign) language needs? There is no general answer to this question. The age of the language and properties of the surrounding culture play a role here. Yet, there is perhaps one reason for believing that sign languages can function adequately with fewer morphemes: due to the influence of gestural elements (also iconically-​based), morphemes can be modified for use in a given context in many ways. In this sense, sign morphemes are perhaps comparable to consonantal roots in Semitic languages, which represent a ‘semantic field’ rather than a specific individual word meaning.

1.4  What about syllable structure? Limiting ourselves to alleged monomorphemic signs, it would seem to follow from the monosegmental hypothesis that signs cannot have syllable structure. After all, syllable structure in spoken language presupposes combinations of segments that occur in a specific linear order. Moreover, in spoken languages, syllabic organization is based on an alternation between two very different kinds of segments, namely consonants and vowels, which, in sequence, form a mandibular open/​close rhythm (which perhaps ultimately underlies the production and recognition of syllabic units). As Sandler (2008) points out, there is no mandibular cycle in sign, or anything remotely like it. There are of course signs that have a linear sequence of segments, but such segment sequences result from concatenative morphological operations, which, except for compounding, are not typical in sign languages. Sequences of segments also occur in fingerspelled words. When compound structures become opaque, and effectively simplex morphemes (Liddell & Johnson 1986), or when fingerspelled words are compressed and become, as such, conventionalized morphemes (Wager 2012), morphemes that consist of a sequence of segments can arise. Putting those cases aside for the moment, our claim that monomorphemic signs have no syllable structure, and thus no sequential phonotactic organization, simply because there is no segment sequencing, is seemingly at odds with many other claims in the literature. After Stokoe’s groundbreaking work, which stressed the simultaneity of the units that constitute a sign, later researchers (e.g., Supalla & Newport 1978; Newkirk 1981; Liddell 1984; Liddell & Johnson 1989) argued that it was necessary to be able to make reference to the beginning and end point of (the movement of) signs, for example for inflectional purposes (I-​G IVE-​YOU as opposed to YOU-​G IVE-​M E ), or to express assimilations involving a switch in the beginning and end point of the movement (see Sandler (1989) and van der Hulst (1993) for a discussion of the arguments). Without formally recognizing the beginning and end point in the linguistic representation, it would be impossible to formulate morphological or phonological rules that refer to these entities. These considerations led to the adoption of structural timing units, forming a skeleton to which the content units (class nodes, features) of the sign associate in an autosegmental fashion (as explicitly proposed in Sandler (1986)). Most researchers (Johnson & Liddell 1984; Liddell & Johnson 1989; Sandler 1989, 1993; Perlmutter 1992; Corina 1996; Brentari 1998, 1999) proposed a skeleton that not only represented the initial location and final location, but also an intermediary movement unit M, as shown in (20). (20)

L          M          L


Several researchers then supported the M unit by assigning a central perceptual status to this unit (see Perlmutter (1992), Corina & Sandler (1993), Sandler (1993), and Brentari (1998) for relevant discussions). Given (20), it was almost ‘inevitable’ to draw an analogy between the LML sequence and a CVC-​syllable in spoken language (see Chinchor 1978; Coulter 1982). Pursuing these earlier ideas, Perlmutter (1992) explicitly compares the M to the vowel (or nucleus) in spoken language and even adds a ‘moraic’ layer to the representation. As we illustrated in (11), in the model that we have formulated, the presence of a path movement follows from a sign having two setting specifications. Since we have located the ‘movement’ features within the segmental content of the sign, i.e., in the manner class node, we maintain the claim, made in van der Hulst (1993), that the ‘M’ unit does not belong in the skeleton. However, in the representation of path movements, the setting points must be linearized even for monomorphemic signs, because a movement can go from [high] to [low] or vice versa. We therefore do need a skeleton ‘tier’ that is separate from the setting features. Moreover, linearization is not only necessary for setting features, but also for the features that specify local movements, i.e., orientation changes and aperture changes. Van der Hulst (1993) thus proposes a sign skeleton with two positions (‘x’), to which the various features that need to be linearized are associated, as in (21). (21)

Wilbur (1985) proposes a model that concurs with the dependency model by establishing linearization of features on different tiers (such as the location tier, orientation tier, etc.), each of which forms a syllable, and in which the different syllables are unified under one node. This dependency model differs in various ways from Sandler’s hand-​tier model (Sandler 1987) and Brentari’s prosodic model (Brentari 1998). The former adopts the M unit and associates feature content units to the position in the LML skeleton. The latter accepts the bipositional skeleton, but separates all static features and all dynamic features (called prosodic) under separate nodes in the structure. In (22), we provide representations of these two models. 17


(22)  a. Sandler's hand-tier model (Sandler 1987; also see Sandler & Lillo-Martin 2006: 132–181): [tree not reproduced; its node labels are an L–M–L skeleton, location and setting features linked to the L positions, and a handshape tier (selected fingers, joint position) whose features are associated with the whole sequence]
      b. Brentari's prosodic model (Brentari 1998: 26): [tree not reproduced; its node labels are a Root dominating Inherent features (Manual: Weak hand, Strong hand; Selected fingers, Joint, Fingers; Orientation; Location) and Prosodic features (Setting, Path, Handshape change)]

Sign researchers (e.g., Padden & Perlmutter 1987) have observed that when a sign has several dynamic properties, such as a path movement and an orientation or aperture change, the beginning and end point of each movement component are synchronized. This is precisely what the model in (21) predicts. The skeletal positions can also play a morphological role and function as the units to which inflection features associate (see Sandler (1987) for other processes that affect the skeleton). We thus come to the apparently ‘odd’ conclusion that whereas signs lack ‘suprasegmental’ sequencing, they do possess intrasegmental sequencing. If sequencing is the


hallmark of syllable structure, we could also say that in sign structure, the syllable is inside the segment, rather than being superimposed on it. We think we can perhaps explain this difference with reference to a crucial difference between auditory and visual perception. Visual perception of signs, even if these have dynamic and apparently sequential properties, is more ‘instantaneous’ than the perception of auditory speech input, which is necessarily stretched out in time (see Brentari (2002) and Emmorey (2002)). The latter reaches the ear bit by bit (albeit as a continuous stream), and it seems plausible to assume that the auditory system first and foremost imposes a linear organization on speech percepts, perhaps based on the rhythmicity that results from the motoric actions of the articulatory system, before it partitions the sequential chunks into simultaneous smaller properties (i.e., features). In contrast, visual percepts capture the entire signs as a whole and would first of all deliver a partitioning into simultaneous layers which, then, takes precedence over the linear sequencing of the dynamic aspects at (some of) these layers. Adopting a terminology proposed in Goldsmith (1976), we can represent the difference in terms of the notions of vertical and horizontal slicing of the signal. When comparing sign and speech, these occur in different dependency relations, as illustrated in (23). (23)

SPEECH                          SIGNING
vertical (syntagmatic)          horizontal (paradigmatic)
        |                               |
horizontal (paradigmatic)       vertical (syntagmatic)

Thus, an incoming speech signal is first spliced into vertical slices, which gives rise to a linear sequence of segments. Horizontal slicing then partitions segments into co-​temporal feature classes and features. In the perception of sign language, however, the horizontal slicing takes precedence, which gives rise to the simultaneous class nodes that we call Handshape, Movement, and Place. Then, a subsequent vertical slicing of each of these can give rise to a linear organization that is captured by branching nodes (for Aperture and Orientation and Setting) and a bipositional skeleton.29 The position outlined in this section does not deny the importance of the notion of syllable and thus does not invalidate arguments in favor of this unit, even when these have been formulated with respect to models that recognize an M-​unit (Sandler, Perlmutter) or a prosodic node in the feature structure (Brentari). The arguments that Brentari and Fenlon (Chapter 4) provide for syllable structure, in our view, carry over to the model in (21). In acquisition, the emergence of the bipositional skeleton introduces the phase of syllabic babbling. Degrees of ‘sonority’ and of ‘weight’ can also be formulated. The bipositional skeleton with its associated features provides a basis for distinctions between types of syllables, depending on how many feature pairs are associated with the two positions.
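One way to visualize the bipositional skeleton of (21) is as two timing slots to which each branching specification (setting, aperture, orientation) contributes one value apiece, while non-branching specifications are shared by both slots; the synchronization of co-occurring movement components noted above (Padden & Perlmutter 1987) then follows because every dynamic component is read off the same two positions. The sketch below is only a schematic illustration of that idea, with invented field names, not a rendering of the representation itself.

```python
# Two skeletal positions x1, x2. Branching specifications are (initial, final) pairs;
# non-branching specifications are single values shared by both positions.
sign = {
    "setting":  ("high", "low"),    # path movement: high > low
    "aperture": ("close", "open"),  # hand-internal movement: opening
    "location": "chest",            # one major location (static)
}

def linearize(sign: dict):
    """Read the sign off the two skeletal positions; paired (branching) features land
    on x1 and x2 respectively, so all movement components begin and end together."""
    x1, x2 = {}, {}
    for node, value in sign.items():
        if isinstance(value, tuple):   # branching: one value per skeletal position
            x1[node], x2[node] = value
        else:                          # static: shared by both positions
            x1[node] = x2[node] = value
    return x1, x2

print(linearize(sign))
# ({'setting': 'high', 'aperture': 'close', 'location': 'chest'},
#  {'setting': 'low', 'aperture': 'open', 'location': 'chest'})
```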

1.5  Rules

1.5.1  Grammatical phonology and utterance phonology

For spoken languages, various proposals have been made to differentiate the class of phonological rules into different types (see van der Hulst (2016) for a general review). We believe that the major division in types of phonological rules is that between rules that belong to the grammar that delivers linguistic expressions (accounting for allomorphy


at the word level and for form variation in words that depends on syntactic context) and rules that are part of the production system that delivers actual utterances (applying across the board with reference to phonological representations of any size and thus including both word and phrasal phonology); these latter rules belong to phonetic implementation. We will refer to the two subsystems as grammatical phonology and utterance phonology, respectively.30 With this division, all rules that deliver fully predictable properties would belong to the utterance phonology, so that the grammatical phonology only deals with allomorphic alternations due to rules that range from highly restricted (in terms of lexical diacritics or morphological context) to rules that apply fairly regularly (but not ‘automatically’). Many regular grammatical rules (such as vowel harmony rules) can be seen as repairs to cases in which a phonotactic constraint is violated in a morphologically complex word. Others are fully idiosyncratic with respect to certain morphological constructions; here one can think of rules that deal with alternations, such as in electric ~ electricity, where /​k/​alternates with /​s/​in a morphologically restricted context. In this view, the grammatical phonology is seen as ‘digital’, ‘categorical’, or ‘discrete’, while the utterance phonology is ‘analog’ and ‘continuous’. In our view, all phonological processes that are automatic belong to the utterance phonology, including processes that are called allophonic rules. Indeed, allophonic rules (despite their formulation in textbooks or exercises) are actually often subject to speech rate and stylistic factors. For example, aspiration in English varies depending, among other things, on the strength of the following stressed vowel. The literature on phonological processes in sign language is rich, but in our view, all reported processes belong to the utterance phonology, that is, the phonetic implementation system. We believe that the evidence for grammatical phonological rules in sign languages is very limited, if not absent. Utterance rules account for the most natural realization of phonological units, given their simultaneous and sequential context in actual utterances. Sign languages thus display co-​articulatory or assimilation processes, but, unlike in the case of spoken languages, there is little evidence for such processes to have developed into language-​specific allomorphic rules that obligatorily apply in specific contexts (and that are no longer fully automatic). Co-​articulation within morphologically complex forms occurs in cases of sequential morphology (specifically compounding), but also in non-​concatenative morphology, when, for example, the movement of a sign is modified to create a derived word. Co-​articulation within monomorphemic signs affects the way in which the (simultaneous) major units are realized. A  given handshape, say fist, will have slightly different positions for the thumb depending on which part of the hand makes contact with the place (Cheek 2001; van der Kooij 2002; Mauk, Lindblom, & Meier 2008). Another example of a process that occurs in actual signing concerns two-​ handed signs. Frequently (under conditions that are reported in several studies), a two-​ handed sign can be articulated with one hand, typically the strong hand (‘weak drop’; see Padden & Perlmutter (1987) for ASL, van der Kooij (2001) for NGT). 
It can also happen that the weak hand after being part of a two-​handed sign stays in place while a following sign is articulated (‘weak freeze’ or ‘weak hand hold’, see Kimmelman et  al. (2016)). The case of compounding deserves special attention. Liddell & Johnson (1986) and Sandler (1987) have described various processes that affect compounds, such as deletion of initial or final ‘holds’, spreading of handshape, etc. These processes, however, have the characteristics of utterance rules and never have acquired the status of obligatory


rules that automatically must apply when a compound is formed. That said, the effect of such assimilations may lexicalize, i.e., grammaticalize into the lexical representation of compounds, which is why there is nothing to gain by accounting for the implied alternation between the free form and the form as it occurs in a compound in terms of a grammatical allomorphic rule. In short, most if not all processes that sign phonologists have described seem to fall in the realm of utterance phonology and, more specifically, phonetic implementation; some examples are provided in (24). (24)

Examples of implementation processes
a.  Default implementations (default rules), e.g., no specification for location > [neutral space], Selected Fingers [one] > index finger
b.  Adaptations of handshapes and orientations to specific places and vice versa, e.g., thumb position in S vs. AS depending on relative orientation ([radial] or [palm], respectively)
c.  Assimilation processes between units of signs in sequences (compounds/phrasal collocations) or weak freeze
d.  Reduction/deletion processes such as weak drop, or 'weak hand lowering'
e.  Epenthesis of non-lexical movements (Geraci 2009)
f.  Other phrasal processes, e.g., enhancement of movement for sentence position or focus (Crasborn & van der Kooij 2013)
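To give the flavor of what a default implementation rule in (24a) does: it fills in phonetic detail that the lexical representation leaves unspecified, automatically and without affecting the stored form. A minimal sketch of that idea follows (Python, with invented names and values; purely illustrative).

```python
# Utterance-phonology defaults in the spirit of (24a), applied only where the lexicon is silent.
DEFAULT_LOCATION = "neutral space"   # no location specified > [neutral space]
DEFAULT_ONE_FINGER = "index"         # Selected Fingers [one] realized as the index finger

def implement(lexical_form: dict) -> dict:
    """Fill in default phonetic values without altering lexically specified ones."""
    phonetic_form = dict(lexical_form)
    phonetic_form.setdefault("location", DEFAULT_LOCATION)
    if phonetic_form.get("selected_fingers") == "one":
        phonetic_form.setdefault("realized_finger", DEFAULT_ONE_FINGER)
    return phonetic_form

print(implement({"selected_fingers": "one"}))
# {'selected_fingers': 'one', 'location': 'neutral space', 'realized_finger': 'index'}
```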

An important point regarding implementation processes is that while, on the one hand, these processes can be grounded in articulatory or perceptual factors (‘phonetics’), it is also possible that such processes are motivated by semantic factors, which lead the signer to add (sometimes called gestural) elements to signing that would appear to be iconically motivated. As reviewed in Goldin-​Meadow & Brentari (2017), the co-​occurrence of lexical phonological properties and gesture properties is particularly salient in sign languages. This means that the goal of isolating contrastive properties of signs involves abstraction not only from universal effects that are due to the physical and psychological state of the signer, idiolectal properties resulting from gender, age etc., co-​articulation and stylistic factors, but also from gestural aspects. As they point out, this presupposes the possibility of identifying aspects that are due to gesture, which is not an easy task, assuming that indeed we are dealing with two separate systems. The authors also point out that, as for the three major units of signs, handshape seems to be the most categorical, i.e., least subject to gestural mutation, while location and movement would appear to be different in this regard.

1.5.2  Why do sign languages lack a grammatical phonology? We now need to address the question as to why sign languages appear to lack grammatical phonological rules. But first, we take a look at some cases that have been claimed to be precisely of this type. One reviewer mentions the following two examples that could be considered as cases of allomorphy:

(i) Verb agreement realization in ASL & ISL (Meir 2002): agreement realization in ASL can be expressed by path features only (e.g., HELP), by orientation and path movement (e.g., ADVISE), or by orientation only (e.g., SAY-YES-TO, SAY-NO-TO).
(ii) Reciprocals in German Sign Language (DGS) (Pfau & Steinbach 2003): one- and two-handed signs have different forms for expressing the reciprocal. One-handed stems have a simultaneously realized reciprocal form, while balanced, two-handed stems have a sequential form for the reciprocal.

We would like to argue that the alternations in the cases in (i) can be treated as suppletive, even though the choice as such is phonologically motivated, that is, dependent on the phonological form of the base verb.31 The case in (ii) is in our view implementational, since the expression of reciprocity simply must be sequential in two-handed signs. But our point is not that sign language could not have grammatical phonology. Even if an occasional case can be made, the question still remains why there is so little of it. To gain insight into this observation, we first note that it has frequently been argued that the various types of phonological rules in spoken languages reflect stages in the ‘life cycle’ of phonological processes, the idea being that phonetic effects of universal phonetic implementation processes can be exaggerated (for whatever reason) and give rise to language-specific implementation rules, which (perhaps first being optional and then becoming obligatory), with further exaggeration, then fall within the scope of feature specifications, so that they develop into what are traditionally called allophonic rules. Allophonic rules (perhaps only after becoming neutralizing) eventually become detached from their phonetic roots and give birth to non-automatic rules that account for allomorphy, which then may transition from general rules to rules that have a few or even many exceptions. This leads to a point where the rules can be analyzed as part of morphological operations. The question is thus why, in sign language phonology, processes that start out as phonetic implementation processes fail to transition into allomorphy rules. We attribute this ‘failure’ to what we believe to be a crucial difference between phonology in spoken and sign languages. In sign languages, the phonological form of morphemes and words that are stored in the lexicon is foremost a grammaticalization of the iconic motivation of signs, rather than of their phonetics. In contrast, phonological form, both static and dynamic, in spoken languages emerges from the grammaticalization of phonetic, implementational preferences and processes. While we do not deny that such processes and preferences are visible (!) in the phonological form of sign languages, we believe that semantic/iconic factors play an overriding role in the emergence of the phonological form of signs. This being so, sign languages resist rules that ‘blindly’ alter phonological properties of morphemes, that is, rules that are blind to the semantic value of these properties. The fact that many form elements in signs are meaning-bearing explains, in our view, the resistance in sign languages to develop allomorphy rules. These rules would, after all, render the semantic ‘compositionality’ of signs opaque by removing or mutating the meaning-bearing form elements.
While, as mentioned, bits and pieces of the members of compounds may be deleted or assimilated, this perhaps is less damaging because, on the whole, the semantics of compounds are inherently less predictable (in all languages). The fact that such rules do not become general rules of the language is here claimed to be due to the semantic damage that they will do; this point is also made in Sandler (2017). We conclude that the lack of grammatical phonology is both due to and compensated for by the semantic value of many, perhaps most phonological units. We elaborate this point in the next section, with reference to earlier work that paved the way for this viewpoint, which calls for a re-evaluation of what counts as a morphologically complex sign.32

1.6  Iconicity

1.6.1  Discrete iconicity and gradual iconicity In early work on sign phonology, the obvious iconic motivation of many signs was downplayed or ignored, because it was important at the time to make the point that sign language, just like spoken language, has autonomous ‘meaningless’ structure in the form and thus duality of patterning (or double articulation), the existence of which was (implicitly) taken to be a defining characteristic of human natural languages (Martinet 1955; Hockett 1960). Early on, sign phonologists took exception to the ‘denial’ of iconicity; see Friedman (1977), Mandel (1977), and Boyes-Braem (1981). The question is now whether acceptance of iconically motivated properties of signs undermines the claim that sign languages have phonological structure, i.e., a ‘dual articulation’. In order to understand how iconicity contributes to the form of signs, we must distinguish between discrete iconicity and gradual iconicity. Discrete iconicity occurs in two types: recurrent and incidental. The iconic motivation of a specific form element is recurrent in the lexicon when a genuine phonological element is used with a semantic value in multiple signs. An example is the location on the side of the forehead, which is frequently associated with the meaning ‘mental state or process’. This location is used in ASL signs such as THINK, UNDERSTAND, TEACH, DREAM, etc. A similar point about phonological units being meaning-bearing is made in Lepic et al. (2016) in a study about two-handed signs. The authors show that the fact of being two-handed corresponds to a small list of semantic properties well above chance, similar to the properties found for NGT two-handed signs (van der Kooij 2002).33 In case a form element is iconically motivated but occurs only in one sign, we call it incidental; an example would be the location near the kidney for the NGT sign KIDNEY. With the term ‘gradual iconicity’ we capture the fact that the specific articulation of a sign aspect is determined by the referential context by showing ‘form copying’ (similar to what Goldin-Meadow & Brentari (2017) describe as co-sign gesture). For example, the hand configuration feature [open] refers to the aperture between the thumb and the selected fingers. The exact degree of opening may be partly determined (aside from articulatory factors) by the size of the referent (e.g., a book or a piece of paper). We will refer to KIDNEY cases and gradual iconicity as reference copying. In the next section, we propose that the three-way distinction between discrete recurrent iconicity, discrete incidental iconicity, and gradual iconicity is reflected in three different mechanisms in the phonology. We start with gradual iconicity, which is perhaps the least controversial in its treatment.

1.6.2  Gradual iconicity Adaptation of the form of signs to specific sign acts cannot, in principle, be recorded in the lexicon, unless one adopts an extreme form of exemplar-based phonology, according to which every instance of producing or perceiving a sign is added to the lexicon. While it is no doubt possible for signers to mentally record specific sign instances, we believe that this cannot replace the need for an abstract lexical representation that ties all the instances of use together, abstracting away from what differentiates them. We therefore take it to be uncontroversial that gradual iconicity arises in phonetic implementation. This implies that phonetic implementation itself can be dependent on iconicity. Signs can be adapted in use to formally represent properties of a (mental image of) a thing or action that is being referred to at a specific time and place of signing. This makes such gradual iconicity pragmatic in a sense. Gradual iconicity can also be found in spoken language, when speakers adapt the pronunciation of a word, for instance, when saying that the benefit of a tax bill for the middle class will be huuuuuge (see Okrent 2002).

1.6.3  Incidental discrete iconicity Incidental iconic properties are an obligatory part of the lexical representation of a sign, i.e., these properties are conventionalized. However, we do not believe that such properties, while discrete, function as phonological units that can be used contrastively other than being meaning-bearing themselves. Adding the location for KIDNEY to a list of contrastive locations would be a mistake. Van der Kooij (2002) and van der Hulst & van der Kooij (2006) have proposed that such properties are designated as being pre-specified phonetic properties, which bleed what would otherwise be a default sublocation within a major location, assuming that such default sublocations need not be encoded in the lexical representation. We take detailed properties to arise in phonetic implementation. As an example, to illustrate the difference between phonetic pre-specification and default specification, we can compare the NGT signs SOLDIER and ALSO, which share the major location feature [chest]. In the latter sign, the thumb side of the hand touches the upper chest contralaterally, which we take to be the default sublocation because it is more ‘natural’ (i.e., easier) to touch the chest contralaterally. The former sign places the hand ipsilaterally, representing holding a rifle. The ipsilateral location, which does not follow the default rule, has to be pre-specified in the lexical representation of the sign SOLDIER. If all instances of phonetically pre-specified properties were accepted as distinctive features, the list of features would become very long, perhaps infinitely long. Moreover, when iconicity gets somehow lost, default sublocations may appear (e.g., the cheek as default sublocation of [side of head] when the motivation of the specific sublocations ear (from SLEEP) and mouth (from EAT) are bleached in the reduced ASL compound HOME). We here add the suggestion that iconically motivated phonetic properties, perhaps because they need to be pre-specified in the lexicon, can give rise to phonological elements that carry meaning if their use, by a form of analogical extension, becomes recurrent.
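To make the contrast between default specification and phonetic pre-specification concrete, the following is a minimal computational sketch, offered purely as an illustration: the lexical entries, feature labels, and default table are hypothetical and are not part of the model proposed in this chapter.

```python
# Toy illustration of default sublocations vs. lexical pre-specification.
# Feature labels, entries, and the default table are invented for this sketch.

DEFAULT_SUBLOCATION = {
    "chest": "contralateral",      # assumed default: easier to touch the chest contralaterally
    "side of head": "cheek",
}

LEXICON = {
    "ALSO":    {"location": "chest"},                                # no sublocation stored
    "SOLDIER": {"location": "chest", "sublocation": "ipsilateral"},  # iconically pre-specified
}

def realize(sign):
    """Fill in a default sublocation unless the lexical entry pre-specifies one."""
    entry = dict(LEXICON[sign])
    entry.setdefault("sublocation", DEFAULT_SUBLOCATION[entry["location"]])
    return entry

print(realize("ALSO"))     # {'location': 'chest', 'sublocation': 'contralateral'}
print(realize("SOLDIER"))  # {'location': 'chest', 'sublocation': 'ipsilateral'}
```

On this sketch, pre-specified material simply bleeds the default step, which is the behavior attributed above to signs like SOLDIER.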

1.6.4  Recurrent discrete iconicity In sign languages, new signs are overwhelmingly iconic (van der Kooij & Zwitserlood 2016, submitted): motivated form is projected from meaning (within the limits of what is anatomically feasible and ‘painless’), using form elements that can also occur as ‘meaningless’ form units, which means that they belong to the arsenal of phonological building blocks of the language. While recurrent iconic form elements can occur in the existing (sometimes called frozen) vocabulary (see Johnston & Schembri 1999; Brentari & Padden 2001), we would like to suggest that form elements are only recognized as being potentially meaning-bearing if they occur productively in newly created words and/or in morphological constructions. Thus, so-called classifiers (which are formal parts in that
they are ‘handshapes’) qualify as such units (see Tang et al. (Chapter 7) and Zwitserlood (Chapter 8) for discussion of classifiers). However, we repeat that a condition on recognizing phonological units that can occur as meaning-​bearing is that they can also occur as pure form units, i.e., non-​meaning bearing. This is paralleled in spoken languages by cases in which a feature such as [nasal] functions on its own, as a morpheme (although without being iconic), while at the same time being used in many other circumstances as a distinctive form element.34 The question is now how we formally account for the grammatical mechanism that introduces meaning-​bearing form elements in signs. A  traditional understanding of morphological structure is that it involves an exhaustive partitioning of words into minimal meaning-​bearing forms (called morphemes), such that the semantic value of the whole structure is the compositional result of the meaning of the parts and their mode of combination. However, this requirement may be too strict. We suggest here that meaning-​bearing form elements (whether iconically motivated or not) can be regarded as morphemes, even if the sign form cannot be exhaustively parsed into meaning-​bearing units.35 If we then argue that the meaning-​bearing form elements can also be treated as morphemes in frozen lexical items, most signs that are traditionally thought of as monomorphemic must be analyzed as morphologically complex (Zwitserlood 2001, 2003a,b, 2008).36 Perhaps, typically, the reason that such frozen forms must be listed is that their meaning as a whole cannot be fully predicted from the parts, especially if some parts are meaningless. We need to acknowledge that since the meaning of such signs is in part unpredictable like in compounds, they need to be listed as such in the lexicon. An alternative to the ‘radical’ morphological approach just sketched can be extracted from the proposals made in Boyes-​Braem (1981), which are further developed in Taub (2001). Given that new formations display a systematic use of iconicity, Boyes-​Braem (1981: 42ff.) proposes a derivational system in which properties of a mental image (which she calls a visual metaphor) that can be associated to a semantic concept are encoded by one of the independently needed phonological elements of the language (which she calls morpho-​phonemic features, i.e., phonological features that can play a ‘morphological’ role). While this is an interesting proposal, it remains to be seen whether it ultimately differs from our radical morphological approach, but we refer to van der Kooij & Zwitserlood (submitted) for further discussion of these issues. It could perhaps be argued that the approaches advocated in this section imply the elimination of sign phonology, if meaning is assigned to building blocks that were formerly held to be meaningless. However, we think that it is more correct to speak of a conflation of phonology and morphology in the sense that the decomposition of signs intermingles phonological and morphological structure, rather than the former strictly preceding the latter.37

1.7  Concluding remarks In this chapter, we have reviewed the emergence of sign phonology as a field of study, beginning with the work of William Stokoe. We discussed further developments and presented our own model for a phonological organization of signs, including class nodes, features, and a bipositional template, all subject to phonotactic constraints. Other influential models were also discussed. While sign languages have traditionally been claimed to share most of their constraints, typological work, of which we need more, tentatively suggests that differences do occur, albeit in a much more modest way when compared to spoken languages. Turning to the dynamic side of phonology, and bearing in mind the distinction between grammatical and utterance phonology, we suggested that sign languages appear to lack grammatical phonological rules, i.e., allomorphy rules. We discussed the proposal that monomorphemic signs are monosegmental, and this led us to re-evaluate claims about syllable structure in signs. We then considered the role of iconicity, drawing attention to the fact that features and class nodes are not always meaningless, a view that has implications for the traditional ‘phonological analysis’ of signs, leading to the idea that phonological and morphological structure are intermingled.

Acknowledgments We are grateful to Shengyun Gu, Wendy Sandler, and the editors of this volume for careful reading of and comments on previous versions of this chapter.

Notes 1 We refer to several reviews of the sign phonology field which focus on reviewing different theoretical models: Corina & Sandler (1993) and van der Hulst (1993) for earlier models; see also Sandler (2012, 2017), Brentari (1995, 2011, 2012), and Fenlon, Cormier, & Brentari (2018). 2 In Section 1.6.4, we will also address the question whether such meaning-bearing units should be identified as morphemes. 3 In Section 1.4, we will discuss different views on the status, or place in the grammar, of such rules. 4 For a history of the sign phonology field, see van der Hulst (to appear). Of particular interest is the fact that alongside Stokoe (1960), West (1960) develops a very similar proposal concerning the duality of signs, based on his study of the Plains Indian Sign Language. 5 With reference to signs, ‘context-free’ means that the constraint does not make reference to other major units, but instead only makes reference to the co-occurrence of features within each major unit. 6 An extensive comparative study of the phonology of 15 languages, based on a database of 15,000 signs (1,000 for each language), is currently being carried out in the SignTyp project, led by Harry van der Hulst and Rachel Channon; NSF grant BCS-1049510. 7 Here and in (2), we use ‘technical terminology’ that will be explained in following subsections, after which the reader might want to go back to re-inspect (1) and (2). 8 In Section 1.6, we discuss the notion ‘monomorphemic’ that is used here, arguing that many alleged monomorphemic signs may have to be analyzed in terms of containing at least one (possibly more) phonological building block that is meaningful. 9 This is a simplification because when the selected finger index and thumb make contact, the unselected fingers are extended; see Sandler (1989) and van der Kooij (2002) for a detailed discussion. 10 The ‘insult gesture’ (‘the finger’) occurs as a so-called emblem, that is, a stand-alone gesture that can be used (although should be avoided) by all members of certain cultures. It does not, as far as we know, occur in sign languages as part of regular signs; see Kendon (2004) for an account of gestures. 11 Such predictability is accounted for in allophonic rules, i.e., rules that account for non-distinctive variants of handshape units. Stokoe (1960) and Friedman (1976), for example, distinguish between the /A/ hand as a phonemic unit and various [A]-variants. 12 Eccarius (2008), who uses the model of Optimality Theory, proposes that a distinction is needed between primary and secondary selected fingers. In our model, more complex structures are possible when licensed by the morphology. For instance, two sequential sets of selected fingers within one syllable are possible if they stem from letter signs (e.g., change from [all] to [one] in NGT #BLUE, to represent the first two letters of the Dutch word blauw (‘blue’)). 13 This notion of relative orientation was called ‘focus’ in Friedman (1976), among others. The term ‘leading edge’ can also be found.
14 Below in (21), we identify the relative orientations as sub-articulator features, encoding different parts of the articulator, the hand. 15 This leads to the constraint that signs must have movement, which feeds into the idea that the movement unit is the nucleus of the sign syllable; see Section 4. 16 This example may be analyzed as morphologically complex, consisting of the sign HEAR and a negative suffix. 17 See van der Hulst & Sandler (1994) for an extensive comparison of the two views, which comes out on the side of acknowledging a movement unit. In Section 2.4, while claiming that movement is implied by having two setting specifications, we will identify a major unit with the label ‘manner’, which takes on the role that Sandler (1996a) attributes to a movement unit, which in her model is structurally quite different from the location and handshape units. We do not deny the need for movement features, but we do deny a skeletal position to accommodate them; see Section 1.2.4. 18 Below, we will distinguish between simple and complex path movement. 19 An alleged exception to the simultaneous articulation of the two movement components is for instance the outlining of the rectangular shape in the NGT sign PURSE (which has a closing movement at the end of a path). Arguably, it is the outlining property that motivates this sequential articulation of the lateral path and the closing local movement. In current signing, however, we do observe simultaneous articulations of the two movement components in the sign PURSE, thus overruling the outlining of a rectangular shape. 20 A potential alternative for characterizing other non-interpolation paths, like zigzag movements, would be to analyze them as combinations of two path movements, one being repeated and perpendicular to the main path. This approach, while perhaps feasible, cannot capture other ways in which the path movement can have special properties. This is why we adopt a manner feature analysis of these properties of non-interpolation path movements. 21 There are, however, more types of possible secondary movements, including rubbing, scissoring, and pumping (e.g., the thumb in repeatedly clicking a ballpoint), which cannot readily be analyzed in a similar fashion. For such cases, we again need ‘manner’ features. However, we also think that an account of the whole array of secondary movements will most likely include reference to iconically motivated movements that occur in a small set of signs and which do not warrant an extension of the set of features. 22 Spoken language articulation also has access to various articulators, notably the lips and the tongue (and various subparts of the tongue), which are often regarded as different articulators. In spoken language, articulators can be combined and the presence of each is contrastive, as in, for example, the complex consonant /kp/, which in some languages contrasts with both /p/ and /k/. 23 Van der Hulst (1996), who calls these two types of two-handed sign ‘balanced’ and ‘unbalanced’, respectively, argues that the weak hand deserves its own place in the structure in both types, which requires a branching structure of the Articulator node, with the weak hand being a dependent node. He further argues that the dependent status of the weak hand accounts for its underspecification.
In balanced signs, the weak hand is fully underspecified (because it is identical to the strong hand), while the choice for unmarked handshapes in unbalanced signs likewise correlates with a very low degree of specification. 24 For example, in articulator-​based theories, there are usually four articulators (labial, coronal, dorsal, and radical), and to the extent that these articulators can reach more than one location, ‘dependent’ features are specified under the articulator features that refer to different locations (such as [±anterior]) or to shapes of the articulator (such as [±distributed], [±retroflex], and [±lateral] as dependents of [coronal]). Other phonologists will, while using articulator features such as [coronal], [dorsal] etc., regard these as choices under a node labelled ‘Place’. 25 The structure for spoken segments usually also includes a laryngeal node for phonation features for consonants, or tone features for vowels. 26 The sign structure in (15) captures an insight offered in Stokoe (1991), who compares the articulation of a sign to a subject-​predicate structure in which the hand (the ‘subject’) performs an action (‘the predicate’), which involves a movement toward or on a place. Stokoe suggests that this prototypical sign structure was a model for the emergence of the prototypical sentential syntactic structure: [subject/​actor [verb/​action [‘theme’]]].

27 The sub-articulators correspond to the separate articulators [labial], [coronal], [dorsal], and [radical] in models for segments in spoken languages. The sub-articulator [coronal] can take different shapes, namely apical and laminal, retroflex and lateral. Such finer distinctions are not present in the sign geometry. 28 In spoken languages, vowels can constitute a morpheme by themselves, as well as consonants; the latter case is typically restricted to bound morphemes. However, given the limits on inventory sizes (with an average of 34) (Maddieson 1984), the number of segments in spoken languages is far below the average number of morphemes that sign languages have (although we are not aware of any estimates for this average number). 29 The picture for spoken language is slightly more complex, since, for example in a tone language, the tone layer is arguably horizontally sliced first. This results in the autosegmental nature of tone, as proposed in Goldsmith (1976). 30 Our distinction between grammatical and utterance phonology relates to ‘lexical rules’ and ‘post-lexical rules’, following Kiparsky (1982). While utterance rules would certainly be post-lexical (and in fact, post-syntactic), grammatical rules can be lexical rules (referring to ‘words’) as well as phrasal rules that are grammatically conditioned. 31 See Nevins (2011) for phonologically-motivated suppletion in spoken languages. 32 There could be additional factors for why spoken languages are so characteristically rich in allomorphic variation. One such factor could be the stability in language transmission across generations, which imposes allomorphy on the learner even though, as is well known, allomorphy is non-optimal from the viewpoint of one-to-one form-meaning relations. And indeed, learners are probably responsible for analogical levelling and extension, which eliminate such allomorphic variation or make it more general by extending it to more words. We speculate that learners of sign languages, perhaps being typically guided by a rather impoverished or more diverse input (because their primary learning environment, i.e., their caretakers, may not be native signers), have more freedom and opportunity to create optimal form-meaning relations which militate against allomorphy. Finally, a factor that stabilizes allomorphy in spoken languages may be the presence of alphabetic writing systems. 33 We also refer to Meir et al. (2013) for a different kind of demonstration of the systematic role played by iconicity in the grammar of sign languages. 34 Perhaps a closer analogue would be cases in which a feature is used ideophonically in a spoken language, when words occur in two variants that differ in meaning. We refer to Dingemanse (2012) and Dingemanse et al. (2015) for examples and systematic discussion. 35 The case where parsing is non-exhaustive, thus leaving a residue that is called a cranberry morpheme (after the unit ‘cran’), is considered to be exceptional in morphological analysis of spoken languages. 36 Perhaps the earliest proponent of this view is West (1960). 37 In his later work, Stokoe developed the notion of ‘semantic phonology’, which, while not unrelated to the issues discussed here, has a different motivation; see endnote 28. Stokoe suggested that the basic structure of signs, as discussed here in Section 3, is based on a semantic subject-predicate structure.

References Battison, Robbin. 1978. Lexical borrowing in American Sign Language. Silver Spring, MD: Linstok Press. Boyes-​Braem, Penny. 1981. Distinctive features of the handshape in American Sign Language. Berkeley, CA: University of California PhD dissertation. Brentari, Diane. 1995. Sign language phonology: ASL. In John A. Goldsmith (ed.), A handbook of phonological theory, 615–​639. New York: Blackwell. Brentari, Diane. 1998. A prosodic model of sign language phonology. Cambridge, MA: MIT Press. Brentari, Diane. 1999. Phonological constituents in signed languages. In Martin Heusser (ed.), Text and visuality: Word and image interactions, 237–​242. Basel: Weise. Brentari, Diane. 2002. Modality differences in sign language phonology and morphophonemics. In Richard P. Meier, David Quinto-​Pozos, & Kearsey Cormier (eds.), Modality and structure in signed and spoken languages, 35–​64. Cambridge: Cambridge University Press.

Phonological structure of signs Brentari, Diane. 2011. Handshape in sign language phonology. In Marc van Oostendorp, Colin J. Ewen, Keren Rice, & Elizabeth Hume (eds.), Companion to phonology, 195–​222. Oxford: Wiley-​Blackwell. Brentari, Diane. 2012. Phonology. In Roland Pfau, Markus Steinbach, & Bencie Woll (eds.), Sign language: An international handbook, 21–​54. Berlin: De Gruyter Mouton. Brentari, Diane & Carol A. Padden. 2001. A language with multiple origins:  Native and foreign vocabulary in American Sign Language. In Diane Brentari (ed.), Foreign vocabulary in sign language: A cross-​linguistic investigation of word formation, 87–​119. Mahwah, NJ: Lawrence Erlbaum. Channon, Rachel. 2002. Signs are single segments:  Phonological representations and temporal sequencing in ASL and other sign languages. College Park, MD: University of Maryland PhD dissertation. Cheek, Adrienne D. 2001. The phonetics and phonology of handshape in American Sign Language. Austin, TX: University of Texas PhD dissertation. Chinchor, Nancy. 1978. The syllable in ASL: Simultaneous and sequential phonology. Providence, RI: Brown University. Clements, George N. 1985. The geometry of phonological features. Phonology 2. 225–​252. Corina, David P. 1996. ASL syllables and prosodic constraints. Lingua 98(1). 73–​102. Corina, David P. & Wendy Sandler. 1993. On the nature of phonological structure in sign language. Phonology 10(2). 165–​207. Coulter, Geoffrey R. 1982. On the nature of ASL as a monosyllabic language. Paper presented at the Annual Meeting of the Linguistic Society of America, San Diego, California. Crasborn, Onno, Richard Bank, Inge Zwitserlood, Els van der Kooij, Ellen Ormel, Johan Ros, Anique Schüller, Anne de Meijer, Merel van Zuilen, Yassine Ellen Nauta, Frouke van Winsum, & Max Vonk. 2019. NGT dataset in Global Signbank. Nijmegen: Radboud University, Centre for Language Studies. https://​signbank.science.ru.nl/​. Crasborn, Onno & Els van der Kooij. 1997. Relative orientation in sign language phonology. In Jane Coerts & Helen de Hoop (eds.), Linguistics in the Netherlands 1997, 37–​48. Amsterdam: John Benjamins. Crasborn, Onno & Els van der Kooij. 2013. The phonology of focus in Sign Language of the Netherlands. Journal of Linguistics 49(3). 515–​565. Dingemanse, Mark. 2012. Advances in the cross-​linguistic study of ideophones. Language and Linguistics Compass 6(10). 654–​672. Dingemanse, Mark, Damián E. Blasi, Gary Lupyan, Morten H. Christiansen, & Padraic Monaghan. 2015. Arbitrariness, iconicity, and systematicity in language. Trends in Cognitive Sciences 19(10). 603–​615. Eccarius, Petra. 2008. A constraint-​based account of handshape contrast in sign languages. West Lafayette, IN: Purdue University PhD dissertation. Eccarius, Petra & Diane Brentari. 2007. Symmetry and dominance: A cross-​linguistic study of signs and classifier constructions. Lingua 117(7). 1169–​1201. Emmorey, Karen. 2002. Language, cognition and the brain:  Insights from sign language research. Mahwah, NJ: Lawrence Erlbaum. Fenlon, Jordan, Kearsey Cormier, & Diane Brentari. 2018. The phonology of sign languages. In Stephen J. Hannahs & Anna R.K. Bosch (eds.), Routledge handbook of phonological theory, 453–​475. New York: Routledge. Fischer, Susan D. & Qunhu Gong. 2010. Variation in East Asian sign language structures. In Diane Brentari (ed.), Sign languages (Cambridge Language Surveys), 499–​518. Cambridge: Cambridge University Press. Friedman, Lynn A. 1976. Phonology of a soundless language:  Phonological structure of ASL. 
Berkeley, CA: University of California PhD dissertation. Friedman, Lynn A. 1977. Formational properties of American Sign Language. In Lynn A. Friedman (ed.), On the other hand:  New perspectives on American Sign Language, 13–​57. New York: Academic Press. Geraci, Carlo. 2009. Epenthesis in Italian Sign Language. Sign Language & Linguistics 12(1).  3–​51. Goldin-​Meadow, Susan & Diane Brentari. 2017. Gesture, sign, and language: The coming of age of sign language and gesture studies. Behavioral and Brain Sciences 40. 1–​82. Goldsmith, John A. 1976. Autosegmental phonology. Cambridge, MA: Massachusetts Institute of Technology PhD dissertation.

Harry van der Hulst & Els van der Kooij Hockett, Charles F. 1960. The origin of speech. Scientific American 203. 88–​111. Johnson, Robert E. & Scott K. Liddell. 1984. Structural diversity in the American Sign Language lexicon. Chicago Linguistic Society (CLS) 20(2). 173–​186. Johnston, Trevor & Adam Schembri. 1999. On defining lexeme in a signed language. Sign Language & Linguistics 2(2). 115–​185. Kendon, Adam. 2004. Gesture:  Visible action as utterance. Cambridge:  Cambridge University Press. Kimmelman, Vadim, Anna Sáfár, & Onno Crasborn. 2016. Towards a classification of weak hand holds. Open Linguistics 2. 211–​234. Kiparsky, Paul. 1982. From cyclic phonology to Lexical Phonology. In Harry van der Hulst & Norval Smith (eds.), The structure of phonological representations, vol. 1, 131–​175. Dordrecht: Foris. Klima, Edward S. & Ursula Bellugi (eds.). 1979. The signs of language. Cambridge, MA: Harvard University Press. Lepic, Ryan, Carl Börstell, Gal Belsitzman, & Wendy Sandler. 2016. Taking meaning in hand: Iconic motivations in two-​handed signs. Sign Language & Linguistics 19(1). 37–​81. Liddell, Scott K. 1984. T HINK and B E L IE VE : Sequentiality in American Sign Language. Language 60(2). 372–​399. Liddell, Scott K. & Robert E. Johnson. 1986. American Sign Language compound formation processes, lexicalization, and phonological remnants. Natural Language & Linguistic Theory 4(4). 445–​513. Liddell, Scott K. & Robert E. Johnson. 1989. American Sign Language: The phonological base. Sign Language Studies 64. 197–​277. Maddieson, Ian. 1984. Patterns of sounds. Cambridge: Cambridge University Press. Mandel, Mark A. 1977. Iconic devices in American Sign Language. In Lynn A. Friedman (ed.), On the other hand: New perspectives on American Sign Language, 57–​107. New York: Academic Press. Mandel, Mark A. 1981. Phonotactics and morphophonology in American Sign Language. Berkeley, CA: University of California PhD dissertation. Martinet, André. 1955. Économie des changements phonétiques. Bern: Francke. Mauk, Claude E., Björn Lindblom, & Richard P. Meier. 2008. Undershoot of ASL locations in fast signing. In Josep Quer (ed.), Signs of the time:  Selected papers from TISLR 2004, 3–​23. Hamburg: Signum. Meier, Richard P. 2000. Shared motoric factors in the acquisition of sign and speech. In Karen Emmorey & Harlan Lane (eds.), The signs of language revisited, 333–​356. Mahwah, NJ: Lawrence Erlbaum. Meir, Irit. 2002. A cross-​modality perspective on verb agreement. Natural Language & Linguistic Theory 20(2). 413–​450. Meir, Irit, Carol A. Padden, Mark Aronoff, & Wendy Sandler. 2013. Competing iconicities in the structure of languages. Cognitive Linguistics 24(2). 309–​343. Morgan, Hope E. & Rachel I. Mayberry. 2012. Complexity in two-​handed signs in Kenyan Sign Language: Evidence for sublexical structure in a young sign language. Sign Language & Linguistics 15(1). 147–​174. Napoli, Donna Jo & Jeff Wu. 2003. Morpheme structure constraints on two-​ handed signs in American Sign Language:  Notions of symmetry. Sign Language & Linguistics 6(2). 123–​205. Nevins, Andrew. 2011. Phonologically conditioned allomorph selection. In Marc van Oostendorp, Colin J. Ewen, Elizabeth V. Hume, & Keren Rice (eds.), The Blackwell companion to phonology, vol. 4, 2357–​2382. Oxford: Wiley-​Blackwell. Newkirk, Donald. 1981. On the temporal segmentation of movement in American Sign Language. Ms, Salk Institute of Biological Studies, La Jolla, California [Published in Sign Language & Linguistics 1998, 8(1–​2). 173–​211]. 
Nyst, Victoria. 2007. A descriptive analysis of Adamorobe Sign Language (Ghana). Amsterdam: University of Amsterdam PhD dissertation. Okrent, Arika. 2002. A modality-free notion of gesture and how it can help us with the morpheme vs. gesture question in sign language linguistics (or at least give us some criteria to work with). In Richard P. Meier, Kearsy A. Cormier, & David G. Quinto-Pozos (eds.), Modality and structure in signed and spoken languages, 175–198. Cambridge: Cambridge University Press.

Phonological structure of signs Padden, Carol A. & David M. Perlmutter. 1987. American Sign Language and the architecture of phonological theory. Natural Language & Linguistic Theory 5(3). 335–​375. Peng, Fred C.C. 1974. Kinship signs in Japanese Sign Language. Sign Language Studies 5(1). 31–​47. Perlmutter, David M. 1992. Sonority and syllable structure in American Sign Language. Linguistic Inquiry 23(3). 407–​442. Pfau, Roland & Markus Steinbach. 2003. Optimal reciprocals in German Sign Language. Sign Language & Linguistics 6(1). 3–​42. Sandler, Wendy. 1986. The spreading hand autosegment of American Sign Language. Sign Language Studies 50(1). 1–​28. Sandler, Wendy. 1987. Assimilation and feature hierarchy in ASL. Chicago Linguistic Society (CLS) 23(2). 266–​278. Sandler, Wendy. 1989. Phonological representation of the sign: Linearity and nonlinearity in American Sign Language. Dordrecht: Foris. Sandler, Wendy. 1993. Hand in hand: The roles of the nondominant hand in sign language phonology. The Linguistic Review 10(4). 337–​390. Sandler, Wendy. 1996a. Phonological features and feature classes: The case for movements in sign language. Lingua 98. 197–​220. Sandler, Wendy. 1996b. Representing handshapes. In William H. Edmondson & Ronnie B. Wilbur (eds.), International Review of Sign Linguistics, 115–​158. Hillsdale, NJ: Lawrence Erlbaum. Sandler, Wendy. 1999. Cliticization and prosodic words in a sign language. In Tracy A. Hall & Ursula Kleinhenz (eds.), Studies on the phonological word, 223–​255. Amsterdam: John Benjamins. Sandler, Wendy. 2008. The syllable in sign language:  Considering the other natural modality. In Barbara L. Davis & Krisztina Zajdo (eds.), The syllable in speech production, 379–​408. New York: Taylor & Francis. Sandler, Wendy. 2011. Prosody and syntax in sign language. Transactions of the Philological Society 108(3). 298–​328. Sandler, Wendy. 2012. The phonological organization of sign languages. Language and Linguistics Compass 6(3). 162–​182. Sandler, Wendy. 2017. The challenge of sign language phonology. Annual Review of Linguistics 3,  43–​63. Sandler, Wendy & Diane Lillo-​Martin. 2006. Sign language and linguistic universals. Cambridge: Cambridge University Press. Stokoe, William C. 1960. Sign language structure: An outline of the visual communication systems of the American deaf (Studies in Linguistics:  Occasional Paper 8). Buffalo, NY:  Dept. of Anthropology and Linguistics, University of Buffalo [Re-​issued 2005, Journal of Deaf Studies and Deaf Education 10(1). 3–​37]. Stokoe, William C. 1991. Semantic phonology. Sign Language Studies 71. 107–​114. Supalla, Ted & Elissa L. Newport. 1978. How many seats in a chair? The derivation of nouns and verbs in American Sign Language. In Patricia Siple (ed.), Understanding language through sign language research, 91–​132. New York: Academic Press. Taub, Sarah F. 2001. Language from the body: Iconicity and metaphor in American Sign Language. Cambridge: Cambridge University Press. Tsay, Jane & James Myers. 2009. The morphology and phonology of Taiwan Sign Language. In James H.-​Y. Tai & Jane Tsay (eds.), Taiwan Sign Language and beyond, 83–​130. Chia-​Yi, Taiwan: The Taiwan Institute for the Humanities, National Chung Cheng University. Uyechi, Linda. 1994. The geometry of visual phonology. Stanford:  Stanford University PhD dissertation. van der Hulst, Harry. 1993. Units in the analysis of signs. Phonology 10(2). 209–​241. van der Hulst, Harry. 1995. The composition of handshapes. Trondheim Working Papers 23. 1–​17. 
van der Hulst, Harry. 1996. On the other hand. Lingua 98. 121–​144. van der Hulst, Harry. 2016. Phonology. In Keith Allan (ed.), The Routledge handbook of linguistics, 83–​103. Abingdon & New York: Routledge. van der Hulst, Harry. forthcoming. The history of sign language phonology. In B. Elan Dresher & Harry van der Hulst (eds.), The Oxford history of phonology. Oxford: Oxford University Press. van der Hulst, Harry & Wendy Sandler. 1994. Phonological theories meet sign language: Two theories of the two hands. Toronto Working Papers in Linguistics 13. 43–​74.

Harry van der Hulst & Els van der Kooij van der Hulst, Harry & Els van der Kooij. 2006. Phonetic implementation and phonetic pre-​ specification in sign language phonology. In Louis Goldstein, Douglas H. Whalen, & Catherine T. Best (eds.), Papers in laboratory phonology 8, 265–​286. Berlin: Mouton de Gruyter. van der Kooij, Els. 2001. Weak drop in Sign Language of the Netherlands. In Valery L. Dively, Melanie Metzger, Sarah Taub, & Anne-​Marie Baer (eds.), Signed languages: Discoveries from international research, 27–​42. Washington, DC: Gallaudet University Press. van der Kooij, Els. 2002. Phonological categories in Sign Language of the Netherlands: Phonetic implementation and iconicity. Leiden: Leiden University PhD dissertation. van der Kooij, Els & Inge Zwitserlood. 2016. Word formation and simultaneous morphology in NGT. Poster presented at Gebarendag in Nederland, July 1, 2016. van der Kooij, Els & Inge Zwitserlood. submitted. Strategies for new word formation in NGT: a case for simultaneous morphology. Submitted to Sign Language & Linguistics. Wager, Deborah S. 2012. Fingerspelling in American Sign Language: A case study of styles and reduction. Salt Lake City, UT: University of Utah PhD dissertation. West, La Mont. 1960. The sign language analysis; vol. I  and II. University of Indiana PhD dissertation. Wilbur, Ronnie B. 1985. Toward a theory of “syllable” in signed languages:  Evidence from the numbers of Italian Sign Language. In William C. Stokoe & Virginia Volterra (eds.), SLR ’83: Proceedings of the 3rd International Symposium on Sign Language Research, 160–​174. Silver Spring, MD: Linstok Press. Zwitserlood, Inge. 2001. The complex structure of “simple” signs in NGT. In Erica Thrift, Erik Jan van der Torre, & Malte Zimmermann (eds.), Proceedings of ConSOLE 9, 232–​246. Leiden: SOLE. Zwitserlood, Inge. 2003a. Classifiying hand configurations in Nederlandse Gebarentaal. Utrecht: University of Utrecht PhD dissertation. Zwitserlood, Inge. 2003b. Word formation below and above little x: Evidence from Sign Language of the Netherlands. Nordlyd 31(2). 488–​502. Zwitserlood, Inge. 2008. Morphology below the level of the sign:  “Frozen” forms and classifier predicates. In Josep Quer (ed.), Signs of the time. Selected papers from TISLR 8, 251–​272. Hamburg: Signum.

2 PHONOLOGICAL COMPREHENSION
Experimental perspectives
Uta Benner

2.1  Introduction One of the crucial aspects in everyday communication is comprehension. Obviously, production and comprehension of languages take place effortlessly in native speakers and signers, but looking at the phenomenon from a scientific viewpoint, the process of language perception is not trivial. Many facets of the process are still unclear, starting with the signal. Even though the language signal is variable and “there is a many-to-many mapping between acoustic patterns and phonetic categories”, meaning is conveyed (Nusbaum & Margoliash 2009: 882). This phenomenon has been labeled the ‘lack of invariance problem’, and its relevance for sign languages has been doubted (Grosvald & Corina 2012). Yet, many questions regarding this matter remain open, especially concerning the perception of a signal with many variables such as diverse signers, differences in the use of signing space, and variances in configurations. Different articulatory gestures may lead to the same perception – both in vision and acoustics. This phenomenon is also known in other areas of visual perception: objects can be perceived from different angles, configurations may change; however, the objects are still recognized as the same. Another challenge in this respect is the non-linearity of the signal. Features of one signal part may influence another part of the signal, thus changing its whole appearance. Moreover, as Johnson (2003) points out, the perceptual representation is more than merely the input signal. In perception, not only the language signal itself may play a role, but also additional situational information, for example, seeing the speaker, background noise, and linguistic experience.

2.2  Perceptual sign language characteristics Ever since sign languages have been recognized as naturally evolved complex languages, many studies have investigated their principles. Findings from sign language research have deeply changed how languages are understood in general (Mayberry 2007). Many commonalities have been discovered, most prominently the existence of hierarchical structure at all levels of linguistic description (further information on the linguistic structure of sign languages is provided in the respective chapters of this
handbook). These findings have led to sign languages being viewed as a unique possibility to investigate language universals. What do languages have in common despite the use of different modalities, that is, different modes of signal transmission? But also, how diverse can languages be? Research on sign languages challenges common views and contributes to a more general understanding of language. At first glance, sign languages seem indeed unlike spoken languages because of, for instance, the use of different articulators resulting in a different physical signal. The different modality has effects on language production and perception, one example being the correlation between size of articulators and articulation rate (Bellugi & Fischer 1972; Boyes Braem 1995). Despite the slower production rate in sign languages, which is linked to the bigger size of the articulators, the proposition rate is similar across modalities (Bellugi & Fischer 1972). Furthermore, the limitation on the acceleration of language output is also similar. In general, speeding the signal up 2.5 or 3 times renders the output unintelligible (Fischer et al. 1999). The similarities in proposition rate despite the differences in articulation rate result from one of the main features of sign languages, namely the non-​linearity of the signal. Sign languages are thus less dependent upon serial linguistic distinctions (Emmorey 2007). One of the reasons for that lies in the visual modality, which allows for the simultaneous perception of information. However, as Emmorey (2002) points out, that does not necessarily mean that multiple different messages are produced concurrently. Sign languages being visual languages, phonological information is available early on in the perception process. This explains why static pictures of signs can still be interpreted even though one phonological feature (movement) is missing (Emmorey 2002). This is also closely related to the visual mode, because it is possible to visually perceive simultaneous information. As in sign languages, a lot of phonological information is provided very early on, and several phonological features can be perceived simultaneously, signs in American Sign Language (ASL) are recognized significantly faster than English words (Emmorey 2007). The available phonological information limits the possibilities during lexical retrieval (see Emmorey 2007). Experimentally, insights into early recognition processes can be gained, for instance, from gating tasks. In such a task, words or signs are presented repeatedly in parts with increasing duration after onset. After every presentation, participants are asked whether they recognize the presented word or sign (Grosjean 1980). Gating tasks reveal a two-​stage recognition in sign languages: the phonological features place of articulation and hand configuration are recognized first; the phonological feature movement then leads to lexical recognition (Grosjean 1981; Emmorey & Corina 1990; see also Emmorey 2007). Emmorey (2007) explains that movement is the most temporally influenced phonological property of a sign, and that more time is required in the resolution of this property. Nevertheless, quite some phonological information is provided early on. 
As a consequence, the available phonological information does not only limit the set of possible lexical items, but the phonotactic structure of sign language also reduces the likelihood of being led down a garden path, as only few signs share all the initial phonological features (Emmorey 2007). Further information on lexical retrieval is provided in Gutiérrez & Baus, Chapter 3.
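The two-stage recognition profile revealed by gating can be illustrated with a small sketch of how such a task might be set up and scored. This is a hypothetical example only: the gate step, sign duration, gloss labels, and responses are invented for illustration and do not reproduce the procedure or data of any of the studies cited above.

```python
# Minimal sketch of a gating paradigm: a sign is presented repeatedly from its
# onset, each time with a longer gate, until identification stabilizes.

def make_gates(sign_ms, step_ms=100):
    """Gate durations in ms: step_ms, 2*step_ms, ..., up to the full sign."""
    return list(range(step_ms, sign_ms + step_ms, step_ms))

def isolation_point(responses, target):
    """Index of the first gate from which the target is identified and never abandoned."""
    for i in range(len(responses)):
        if all(r == target for r in responses[i:]):
            return i
    return None

gates = make_gates(sign_ms=600)                                    # [100, 200, ..., 600]
responses = ["?", "THINK", "TEACH", "DREAM", "DREAM", "DREAM"]     # hypothetical answers
ip = isolation_point(responses, target="DREAM")
print(f"Sign identified at gate {ip + 1} of {len(gates)} ({gates[ip]} ms into the sign)")
```

In such a sketch, the isolation point corresponds to the moment at which the accumulating phonological information (first place of articulation and hand configuration, then movement) suffices for lexical recognition.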

2.3  Categorical perception There has been evidence that phonetic knowledge shapes perception, for instance in the case of the perceptual effect called ‘Categorical Perception’. According to this effect, when
a signal is changed continually, the perception nonetheless changes abruptly (Liberman 1957; Liberman et  al. 1967). That is, during perception, people do not perceive small variations of one phoneme but rather perceive categories. The concept of Categorical Perception has been criticized as a misinterpretation (Massaro 1987). Nevertheless, a substantial body of research has investigated Categorical Perception revealing this phenomenon not only in speech perception (for an early overview, see Harnad (1987)) but other perceptual domains as well, for example, emotional facial expressions (Etcoff & Magee 1992; Calder et al. 1996). Therefore, the question arises whether Categorical Perception can also be found in sign language perception. The assessment of Categorical Perception is usually done by using identification and discrimination tasks. During an identification task, participants are asked to categorize continuous stimuli in order to discover category boundaries. A  discrimination task is then used in order to assess whether there is better discrimination across category boundaries than within categories (see, e.g., Sehyr & Cormier 2015). Categorical Perception was investigated for place of articulation as well as for hand configuration but was only found for phonologically distinct hand configurations and only for deaf signers (Emmorey et al. 2003; cf. Emmorey 2007). It may well be that the phonological feature place of articulation has more variable category boundaries since locations on the body may be more variable or continuous (Emmorey et al. 2003). The authors offer further possible explanations for their findings:  linguistic categories may have no impact on spatial discrimination abilities, and the visual system may be better in discriminating spatial locations since it is used to them (Emmorey et al. 2003). One of these phonological rules, for instance, allows displacement of signs or sign lowering, which means that in casual register, the articulation of signs may be lowered (Emmorey et al. 2003; Tyrone & Mauk 2010). Furthermore, Categorical Perception effects were only found for deaf signers. It might well be that deaf signers become visually attuned to perceiving the specific manual contrasts of their sign language (Emmorey et al. 2003; Baker et al. 2005). Furthermore, Hildebrandt & Corina (2002) derive from the works of Eimas (1985) and Kuhl (1983) that perception is influenced by language experience, and this accounts for one’s own language as well as for unfamiliar languages. The results by Emmorey et al. (2003) may result from such language experience. The authors suggest that signers develop particular abilities to perceive language-​relevant distinctions. Nevertheless, the results of Hildebrandt & Corina (2002) show similar categorization abilities for deaf signers and hearing non-​signers. Similar findings, but with naturally produced stimuli (Emmorey et al. (2003) had used computer-​generated stimuli), are provided by Baker et al. (2005), who also report that categorization abilities seem to be independent of language experience. Emmorey et al. (2003) hypothesize that, next to the linguistic basis, there is also a perceptual basis for the investigated categories. This also explains that discontinuities in discrimination abilities between phonetic variants of one type were found regardless of language experience. 
Furthermore, results indicated that the discrimination was poorest in regions close to category prototypes and best for stimuli straddling category boundaries (Baker et  al. 2005). Nevertheless, Categorical Perception was only found in deaf signers and only for phonologically contrastive hand configurations. From the results of Categorical Perception experiments, Emmorey et al. (2003) derive factors which are important in these kinds of experiments: perceptual predisposition, language experience, and developmental/​maturational aspects.
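The identification/discrimination logic behind these experiments can be made concrete with a toy computation. All numbers below are invented for illustration; they are not data from any of the studies discussed in this section.

```python
# Toy illustration of the Categorical Perception logic for a 9-step handshape
# continuum: locate the identification boundary, then compare discrimination
# accuracy for pairs that straddle it with pairs that fall within a category.

# Identification task: proportion of "B" labels at each continuum step (invented).
ident_B = [0.02, 0.05, 0.10, 0.20, 0.85, 0.92, 0.95, 0.97, 0.99]

# Boundary: first step (0-indexed) at which "B" responses exceed 50%.
boundary = next(i for i, p in enumerate(ident_B) if p > 0.5)

# Discrimination task: accuracy for adjacent-step pairs (step i vs. step i+1), invented.
discrim = [0.55, 0.54, 0.58, 0.83, 0.60, 0.56, 0.55, 0.53]

across = discrim[boundary - 1]                                    # pair straddling the boundary
within = [a for i, a in enumerate(discrim) if i != boundary - 1]  # all other pairs

print(f"identification boundary between steps {boundary} and {boundary + 1} (1-indexed)")
print(f"across-boundary discrimination accuracy: {across:.2f}")
print(f"mean within-category discrimination accuracy: {sum(within) / len(within):.2f}")
# Categorical Perception predicts across > within, i.e., a discrimination peak
# at the identification boundary; a continuous percept predicts no such peak.
```

This comparison of discrimination near category prototypes with discrimination at the boundary is the logic underlying the pattern reported by Baker et al. (2005) above.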

Best et al. (2010) investigated Categorical Perception of meaningless minimal manual contrast stimuli across participants with different knowledge of ASL (i.e., deaf native signers, deaf non-​native signers, hearing late signers, and hearing non-​signers). Even though they also were unable to find the classical effects of Categorical Perception, they found differences in the discrimination abilities among the different participant groups, and thus concluded that language experience affects the perception of phonetic variations in this context. In their study, Sehyr & Cormier (2016) examined the perception of handling handshapes in British Sign Language (BSL) by deaf signers and hearing non-​signers. They were interested in whether continua of these handshapes are perceived categorically or continuously. The results showed similar categorization abilities across the two groups but differences between groups regarding reaction time, suggesting underlying linguistic representations for deaf signers. Even though the study showed that language experience alone is not enough to result in handshape feature identification, it also provided evidence that sign language experience refines perceptual capacities. Sign languages do not only rely on manual articulation but also to a great extent on facial expressions. In sign language use, there are different kinds of facial expressions: non-​linguistic merely expressive facial expressions, which are not different from general emotional facial expressions, and linguistic facial expressions. The latter can fulfill lexical or grammatical functions. Categorical Perception has also been tested for linguistic facial expressions. However, the study by Campbell et al. (1999) failed to find evidence for Categorical Perception for linguistic facial expression (i.e., question-​ types displayed on the face), but they did find evidence for categorical judgment for the participants with sign experience. The authors assume that Categorical Perception effects may exist for linguistic facial expressions but may be quite weak. This would parallel the findings in other visual domains, which also show Categorical Perception effects, though weaker than in speech (Emmorey 2007). Campbell et al. (1999) also raise the question of the influence of language experience and language acquisition on the outcome of their experiment (for further information on language acquisition effects, see Section 2.5). Categorical Perception of affective and linguistic facial expressions has also been investigated by McCullough & Emmorey (2009). In this study, deaf signers showed Categorical Perception effects but only for linguistic facial expressions, not for affective expressions. In contrast, hearing non-​signers showed better discrimination abilities across category boundaries for both affective and linguistic facial expressions. The authors conclude that language experience influences the Categorical Perception effects for affective facial expression. Taken together, the results regarding Categorical Perception suggest that phonetic categories are organized in a way that improves and optimizes discrimination abilities in order to be able to adapt to changing situations. Additionally, the findings enforce the idea that Categorical Perception enables the perceiver to focus on the relevant parts of the input signal, irrespective of language modalities.

2.4 Linguistic experience

Linguistic experience (also referred to as language experience or linguistic knowledge) influences not only the perception of one's own language but also how other, including unfamiliar, languages are perceived (Hildebrandt & Corina 2002). However, the effects of linguistic experience are not limited to language. Experience with a visual language also affects non-linguistic visual abilities, such as discriminating and recognizing faces, detecting movement, and interpreting and generating complex visual mental images (Emmorey et al. 1993; Bettger et al. 1997).

McCullough et al. (2005) hypothesize that long-term experience with a sign language may affect the neural representation used for early visual processing of faces. The findings on Categorical Perception (see Section 2.3) already indicated that linguistic experience plays an important role in language processing. However, Categorical Perception is not the only process affected; other components of phonological comprehension are influenced as well. Emmorey (2007) refers to studies showing that phonological information is used even while perceiving moving nonsense signs (Baker et al. 2005) and when viewing static signs (Emmorey et al. 2003). Linguistic experience also influences similarity judgments (Hildebrandt & Corina 2002). Deaf signers were faster and more accurate not only in the classification of signs but also in the classification of self-grooming actions (Corina & Grosvald 2012). Hildebrandt & Corina (2002) also found that signs are judged as highly similar when they share the phonological features movement and location. All subjects in their study, regardless of their linguistic experience, shared this judgment. The authors therefore conclude that movement and location must be core structural elements of ASL. This finding is in line with further research showing a special status for the combination of movement and location in sign production as well as in sign comprehension (Dye & Shih 2006; Baus et al. 2014; for a slightly different picture and methodology, see Wienholz et al. 2018). Morford & Carlson (2011) demonstrate that native signers are able to anticipate the movement of a sign on the basis of information about location and handshape features. Non-native signers, in contrast, show little evidence of integrating phonological cues; rather, they rely solely on handshape during lexical search.

In their comparative study of signed and spoken language recognition, Orfanidou et al. (2010) discovered that knowledge of well-formedness rules is used during BSL comprehension. It was harder for participants to identify real signs in a context of nonsense signs that were impossible BSL signs than in contexts of possible BSL signs. This seems to be a cross-modal effect, as listeners also use knowledge of phonological structure to parse speech input: "Listeners [also] find it harder to spot spoken words in impossible-word than in possible-word contexts" (Orfanidou et al. 2010: 280). These results support the Possible Word Constraint (PWC), which proposes that the activation of candidate words is reduced if the segmentation would result in impossible words (Norris et al. 1997).

Another important aspect of phonological input in language comprehension is the ability to separate relevant from irrelevant input. Linguistic experience may play an important role here, but apparently not during the earliest stages of sign and gesture recognition (Corina et al. 2011). Corina et al. (2011) nevertheless note that experience with a sign language may still improve the ability to categorize signs and gestures, and they hypothesize that comparing signs to non-signs (phonologically well-formed but non-existing signs) may yield different results.
In a confusion paradigm, in which stimuli with varying degrees of similarity were masked with noise, Benner (2012) asked participants with different linguistic experience (deaf native signers, hearing native signers, non-native deaf signers, and non-native hearing signers) to categorize signs and non-signs as well as (self-directed or object-directed) actions. The results suggest that several factors may influence the participants' categorization abilities: language proficiency, nativeness, and stimulus type. Furthermore, Benner (2012) calculated perceptual maps in order to determine whether the different levels of sign language experience shaped the participants' language perception.

By creating perceptual maps, the perceptual space of a listener (in a general, modality-independent sense) can be determined. The similarity and perceptual distance values were calculated from the confusion matrix using Shepard's method (Shepard 1972, as cited in and explained by Johnson 2003). The results (shown in Figures 2.1–2.3) clearly show the influence of linguistic experience on perception. Smaller perceptual maps indicate less clear perception (Johnson 2003). The participant group with the most linguistic experience (native signers) has the largest perceptual map, and less experience leads to smaller perceptual spaces: hearing native signers slightly outperform deaf non-native signers, who, in turn, outperform hearing non-native signers.
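To make the method concrete, the sketch below shows one common reading of Shepard's (1972) procedure as presented by Johnson (2003): pairwise similarities are derived from a stimulus-response confusion matrix, converted into distances, and then projected into a low-dimensional perceptual map (here with multidimensional scaling). The confusion counts are invented example data, not Benner's (2012) results.

```python
# Minimal sketch of perceptual-map construction from a confusion matrix.
# Invented example data; not taken from any cited study.

import numpy as np
from sklearn.manifold import MDS

labels = ["action", "sign", "non-sign"]
# rows = stimulus category presented, columns = response category given
confusions = np.array([
    [80, 12,  8],   # action trials
    [10, 75, 15],   # sign trials
    [ 6, 30, 64],   # non-sign trials
], dtype=float)

p = confusions / confusions.sum(axis=1, keepdims=True)  # response proportions

n = len(labels)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        # similarity: how often i and j are confused, relative to correct responses
        sim = (p[i, j] + p[j, i]) / (p[i, i] + p[j, j])
        dist[i, j] = dist[j, i] = -np.log(sim)  # larger distance = clearer separation

# project the distances into two dimensions to draw a perceptual map
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dist)
for lab, (x, y) in zip(labels, coords):
    print(f"{lab:>8}: ({x:5.2f}, {y:5.2f})")
```

On this logic, groups whose responses are rarely confused across categories yield larger distances and hence the larger perceptual maps reported above for more experienced signers.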

Figure 2.1 Perceptual map for native and non-native participants. The open circle indicates the position of non-signs on an action-sign axis (© Uta Benner)


Figure 2.2 Perceptual map for native deaf and native hearing participants. The open circle indicates the position of non-signs on an action-sign axis (© Uta Benner)


Figure 2.3 Perceptual map for non-native deaf and non-native hearing participants. The open circle indicates the position of non-signs on an action-sign axis (© Uta Benner)

It is, however, not surprising that initial stages of perception are independent of language, as signers need to be attuned to non-linguistic actions in order to detect linguistic movement (Corina et al. 2007). Furthermore, sign language recruits the standard perceptual system used to process human action in general. Nevertheless, experience with sign language may still lead to modifications of this general perceptual system (Corina et al. 2011). Coming back to phonological processing: while deaf signers are sensitive to all linguistically relevant phonological parameters, hearing non-signers are only influenced by the perceptual qualities of signs (Hildebrandt & Corina 2002). The question is to what extent delayed exposure to sign language influences similarity judgments. Late learners rely on different strategies in language processing, strategies that depend on handshape saliency (Hildebrandt & Corina 2002). Morford et al. (2008) put forward that non-native signers are, in a sense, overly accurate in sign language processing because they do not know which elements are linguistically relevant. Based on their research, they propose three possibilities for how language experience influences perception: first, language experience influences the location of perceptual boundaries between phonological contrasts; second, it influences sensitivity to those boundaries; and third, it affects sensitivity to within-category phonetic variation. They further suggest that sign language experience does not lead to a maintenance of phoneme contrasts but rather to a loss of sensitivity to phonetic variability.

2.5 Acquisition perspectives

In language processing, many different aspects have to fall into place, and so far only a few have been identified. As presented above, language experience and linguistic knowledge play an important role on the perceiver's side: perceiving language shapes the general perception system. However, linguistic experience is not the only factor affecting processing; over the past years, studies have provided evidence that the fundamental facet of language experience, that is, language acquisition, influences the perception and comprehension of language.

There is evidence that during the first year of life, the perceptual system is re-organized. As a result, the child is attuned to the surrounding language, leading to a change in the child's discrimination abilities (Werker & Tees 1984; Kuhl 2010; Narayan et al. 2010). This perceptual narrowing has also been identified for ASL (Palmer et al. 2012). Baker et al. (2006) found that four-month-old babies show categorical discrimination abilities for phonetic handshapes, whereas 14-month-old babies do not. They point out that the abilities of the four-month-olds resemble the Categorical Perception effect in deaf native adult signers. These results support the hypothesis that categorization abilities are innate and independent of language experience. However, the perceptual system adapts to the surrounding language, thus optimizing the available resources. As with spoken languages, age of acquisition appears to affect later language performance, with earlier acquisition resulting in better use of attentional resources. The earlier in life someone is exposed to a specific sign language, the better their perceptual abilities to spot linguistically relevant variation in the language signal (Morford et al. 2008). Children demonstrate the ability to discriminate language from other sensory input early on, and this is also true for sign languages. Krentz & Corina (2008), for instance, have shown that six-month-old babies prefer sign language to non-linguistic pantomime videos; it was not possible to replicate this result with 10-month-old babies.

Investigating acquisitional aspects of sign languages is interesting not only with respect to cross-modality effects but also regarding the consequences of late language exposure. Deaf individuals are at a higher risk of being deprived of early language acquisition. Morford et al. (2008) report that more than 90% of deaf individuals are prevented from exposure to a sign language from birth due to the social and demographic setting they live in (most deaf children are born to hearing parents with no sign language knowledge). Children with early exposure to a sign language, however, pass through the same developmental stages as children acquiring spoken language (Lillo-Martin 2009). Mayberry & Kluender (2018) provide an extensive overview of the differences between age of acquisition effects on first vs. second language outcomes. A considerable number of studies investigate the consequences of late language exposure. The results show that late acquisition of a first language has effects on all linguistic levels: on phonology and the lexicon as well as on syntax (Mayberry 2007). This directly affects processing abilities; in particular, an increased age of acquisition leads to increased recognition times for signs. Mayberry (2007) reports different findings related to the phenomenon of delayed first language acquisition and hypothesizes that phonology serves as an organizing structure in early learners of a language. This organizing structure appears to be missing in late learners, thus leading to increased effort and, as a result, to increased recognition times in language comprehension. Knowing a language is closely tied to processing that language: according to Mayberry & Fischer (1989: 753), knowing a language implies having the ability to "see through its phonological shape to its lexical meaning". The ability to comprehend and produce sign language in adulthood is thus reduced by late exposure to a language (Boudreault & Mayberry 2006).
Successful language acquisition (regardless of whether it is first or second language acquisition) is determined by the age of acquisition, because early exposure to a language tunes the visual system for phonology (Mayberry 2007). This holds regardless of language modality: phonology seems to be an abstract mental construct that is not bound to a specific sensory-motor modality of language. Rather, the organization of the mental lexicon along the phonological and semantic dimensions of words appears to be modality-independent. Furthermore, both signers and speakers engage in phonological analysis during lexical access. One stage of sign recognition is phonological processing, which seems to be sensitive to the age of first language acquisition; phonological processing therefore appears to be an abstract and supramodal stage of language processing (Mayberry & Witcher 2005).

That phonology serves as an organizing structure can also be seen in sign production, for example in 'slips of the hand' (Mayberry 2007). Phonological processing appears to be especially affected by late exposure to a language; the consequences may thus vary across the different stages and components of linguistic processing (Emmorey et al. 1995). The special role of phonology during comprehension does not mean, however, that late learners are insensitive to phonological structure. Rather, since they do not know which features to look for, their sensitivity to phonological structure hinders their processing instead of supporting sign recognition (Mayberry 2007). It may also be that the lexical neighborhood in late learners is denser, resulting in slower recognition (Mayberry & Witcher 2005). This would be in line with the findings of Benner (2012), who reports that late learners have a smaller perceptual space than native signers. Hall et al. (2012) go even further: based on their research, they suggest that early exposure to any language (irrespective of modality) provides a person with the ability to discern which aspects of the incoming signal actually matter for language comprehension. In their study on phonological similarity judgments, they found that hearing participants who had learned sign language as a second language patterned with deaf native signers rather than with deaf non-native signers. Early language exposure thus seems to facilitate the acquisition of a second phonological system (Hall et al. 2012).

2.6 Coarticulatory effects

Coarticulation, whereby the articulation of one speech sound varies depending on the surrounding sounds, is regarded as one of the challenges in speech perception. Coarticulatory effects add to the lack of invariance in the language signal, and they have also been described for sign languages. However, in sign languages, these effects are usually spread over longer durations and affect different aspects of a sign. Depending on the context, the appearance of a sign can therefore change significantly as its phonological parameters vary (Yang & Sarkar 2006). Grosvald & Corina (2012) studied long-distance coarticulation in ASL and English. They found little evidence for coarticulatory sensitivity in ASL but considerable sensitivity in English, and they suggest as an explanation that the visual modality allows more direct perceptual access to the relevant articulators. A closer link between language structure and the actual signal therefore seems to exist in the visual modality. The authors further argue, on the basis of one of their earlier studies (Corina et al. 2011), that sign languages might not present us with a real lack-of-invariance problem. It remains an open question to what extent somatosensory feedback during articulation affects coarticulation. There are results suggesting that somatosensory information may impact sign processing (Emmorey et al. 2009), but only little research has been done in this domain so far.

2.7 Conclusion

For successful parsing of language input, several pieces have to fall into place. One of the basic factors in language processing is phonology, and any disruption at this level may affect the subsequent stages of sign language comprehension (Morford et al. 2008). Research on the different aspects of, and factors involved in, phonological processing has provided results that show how languages in general, and sign languages in particular, are parsed.

Some characteristics of sign language processing depend on language modality, whereas others do not. Sign languages rely less on serial linguistic distinctions, since the visual domain affords the perception of simultaneous information and a lot of phonological information is already available early on (Emmorey 2002; Emmorey 2007). Studies on Categorical Perception show that, irrespective of language input, phonetic categorizations are in place to facilitate discrimination. Categorical Perception supports the ability to focus on relevant language characteristics, and this aptitude is further facilitated by language experience and linguistic knowledge. Knowledge of phonological structure enables sign language users to parse language input during comprehension (Orfanidou et al. 2010); relevant input can thus easily be separated from irrelevant input. However, this ability is influenced by the age of acquisition: late language exposure impedes language comprehension, as the ability to focus on the aspects relevant for comprehension is reduced (Hall et al. 2012). It thus appears that language is processed in a uniform way across modalities. Nevertheless, many aspects still remain unknown.

References

Baker, Stephanie A., William J. Idsardi, Roberta Michnick Golinkoff, & Laura-Ann Petitto. 2005. The perception of handshapes in American Sign Language. Memory and Cognition 33(5). 887–904.
Baker, Stephanie A., Roberta Michnick Golinkoff, & Laura-Ann Petitto. 2006. New insights into old puzzles from infants' categorical discrimination of soundless phonetic units. Language Learning and Development: The Official Journal of the Society for Language Development 2(3). 147–162.
Baus, Cristina, Eva Gutiérrez, & Manuel Carreiras. 2014. The role of syllables in sign language production. Frontiers in Psychology 5.
Bellugi, Ursula & Susan Fischer. 1972. A comparison of sign language and spoken language. Cognition 1(23). 173–200.
Benner, Uta E. 2012. Phonological processing of German Sign Language. Stuttgart: University of Stuttgart PhD dissertation.
Best, Catherine T., Gaurav Mathur, Karen Miranda, & Diane Lillo-Martin. 2010. Effects of sign language experience on categorical perception of dynamic ASL pseudosigns. Attention, Perception, & Psychophysics 72(3). 747–762.
Bettger, Jeffrey G., Karen Emmorey, Stephen H. McCullough, & Ursula Bellugi. 1997. Enhanced facial discrimination: Effects of experience with American Sign Language. Journal of Deaf Studies and Deaf Education 2(4). 223–233.
Boudreault, Patrick & Rachel I. Mayberry. 2006. Grammatical processing in American Sign Language: Age of first-language acquisition effects in relation to syntactic structure. Language and Cognitive Processes 21(5). 608–635.
Boyes Braem, Penny. 1995. Einführung in die Gebärdensprache und ihre Erforschung (3rd ed.). Hamburg: Signum-Verlag.
Calder, Andrew J., Andrew W. Young, David I. Perrett, Nancy L. Etcoff, & Duncan Rowland. 1996. Categorical perception of morphed facial expressions. Visual Cognition 3(2). 81–118.
Campbell, Ruth, Bencie Woll, Philip J. Benson, & Simon B. Wallace. 1999. Categorical perception of face actions: their role in sign language and in communicative facial displays. The Quarterly Journal of Experimental Psychology Section A, Human Experimental Psychology 52(1). 67–95.
Corina, David P. & Michael Grosvald. 2012. Exploring perceptual processing of ASL and human actions: Effects of inversion and repetition priming. Cognition 122(3). 330–345.
Corina, David P., Michael Grosvald, & Christian Lachaud. 2011. Perceptual invariance or orientation specificity in American Sign Language? Evidence from repetition priming for signs and gestures. Language and Cognitive Processes 26(8). 1102–1135.
Corina, David P., Yi-Shiuan Chiu, Heather Knapp, Ralf Greenwald, Lucia San Jose-Robertson, & Allen Braun. 2007. Neural correlates of human action observation in hearing and deaf subjects. Brain Research 1152. 111–129.

Dye, Matthew W.G. & Shui-I Shih. 2006. Phonological priming in British Sign Language. In Louis Goldstein, Douglas H. Whalen, & Catherine T. Best (eds.), Laboratory Phonology 8, 243–263. Berlin: Mouton de Gruyter.
Eimas, Peter D. 1985. The perception of speech in early infancy. Scientific American 252(1). 34–40.
Emmorey, Karen. 2002. Language, cognition, and the brain: Insights from sign language research. Mahwah, NJ: Lawrence Erlbaum.
Emmorey, Karen. 2007. The psycholinguistics of signed and spoken languages: How biology affects processing. In M. Gareth Gaskell (ed.), The Oxford handbook of psycholinguistics, 703–721. Oxford & New York: Oxford University Press.
Emmorey, Karen, Ursula Bellugi, Angela Friederici, & Petra Horn. 1995. Effects of age of acquisition on grammatical sensitivity: Evidence from on-line and off-line tasks. Applied Psycholinguistics 16(1). 1–23.
Emmorey, Karen, Rain Bosworth, & Tanya Kraljic. 2009. Visual feedback and self-monitoring of sign language. Journal of Memory and Language 61(3). 398–411.
Emmorey, Karen & David Corina. 1990. Lexical recognition in sign language: Effects of phonetic structure and morphology. Perceptual and Motor Skills 71(3). 1227–1252.
Emmorey, Karen, Stephen M. Kosslyn, & Ursula Bellugi. 1993. Visual imagery and visual-spatial language: Enhanced imagery abilities in deaf and hearing ASL signers. Cognition 46(2). 139–181.
Emmorey, Karen, Stephen McCullough, & Diane Brentari. 2003. Categorical perception in American Sign Language. Language and Cognitive Processes 18(1). 21–45.
Etcoff, Nancy L. & John J. Magee. 1992. Categorical perception of facial expressions. Cognition 44(3). 227–240.
Fischer, Susan D., Lorraine A. Delhorne, & Charlotte Reed. 1999. Effects of rate of presentation on the reception of American Sign Language. Journal of Speech Language and Hearing Research 42. 568–582.
Grosjean, François. 1980. Spoken word recognition processes and the gating paradigm. Perception & Psychophysics 28. 267–283.
Grosjean, François. 1981. Sign and word recognition: A first comparison. Sign Language Studies 32. 195–220.
Grosvald, Michael & David P. Corina. 2012. The perceptibility of long-distance coarticulation in speech and sign: A study of English and American Sign Language. Sign Language & Linguistics 15(1). 73–103.
Hall, Matthew L., Victor S. Ferreira, & Rachel I. Mayberry. 2012. Phonological similarity judgments in ASL: Evidence for maturational constraints on phonetic perception in sign. Sign Language & Linguistics 15(1). 104–127.
Harnad, Stevan. 1987. Psychophysical and cognitive aspects of categorical perception: A critical overview. In Stevan Harnad (ed.), Categorical perception: The groundwork of cognition, 1–52. New York: Cambridge University Press.
Hildebrandt, Ursula & David Corina. 2002. Phonological similarity in American Sign Language. Language and Cognitive Processes 17(6). 593–612.
Johnson, Keith. 2003. Acoustic and auditory phonetics (2nd ed.). Malden, MA: Blackwell.
Krentz, Ursula C. & David P. Corina. 2008. Preference for language in early infancy: The human language bias is not speech specific. Developmental Science 11(1). 1–9.
Kuhl, Patricia K. 1983. Perception of auditory equivalence classes for speech in early infancy. Infant Behavior and Development 6(2). 263–285.
Kuhl, Patricia K. 2010. Brain mechanisms in early language acquisition. Neuron 67(5). 713–727.
Liberman, Alvin M., Franklin S. Cooper, Donald Shankweiler, & Michael Studdert-Kennedy. 1967. Perception of the speech code. Psychological Review 74(6). 431–461.
Liberman, Alvin M. 1957. The discrimination of speech sounds within and across phoneme boundaries. Journal of Experimental Psychology 54(5). 358–368.
Lillo-Martin, Diane. 2009. Sign language acquisition studies. In Edith L. Bavin (ed.), The Cambridge handbook of child language, 399–416. Cambridge: Cambridge University Press.
Massaro, Dominic W. 1987. Speech perception by ear and eye: A paradigm for psychological inquiry. Hillsdale, NJ: Lawrence Erlbaum.
Mayberry, Rachel I. 2007. When timing is everything: Age of first-language acquisition effects on second language learning. Applied Psycholinguistics 28(3). 537–549.

Mayberry, Rachel I. & Susan D. Fischer. 1989. Looking through phonological shape to lexical meaning: The bottleneck of non-native sign language processing. Memory and Cognition 17(6). 740–754.
Mayberry, Rachel I. & Robert Kluender. 2018. Rethinking the critical period for language: New insights into an old question from American Sign Language. Bilingualism: Language and Cognition 21(5). 886–905.
Mayberry, Rachel I. & Pamela Witcher. 2005. What age of acquisition effects reveal about the nature of phonological processing. Center for Research on Language Technical Report 17(3).
McCullough, Stephen & Karen Emmorey. 2009. Categorical perception of affective and linguistic facial expressions. Cognition 110(2). 208–221.
McCullough, Stephen, Karen Emmorey, & Martin Sereno. 2005. Neural organization for recognition of grammatical and emotional facial expressions in deaf ASL signers and hearing nonsigners. Cognitive Brain Research 22(2). 193–203.
Morford, Jill P. & Martina L. Carlson. 2011. Sign perception and recognition in non-native signers of ASL. Language Learning and Development 7(2). 149–168.
Morford, Jill P., Angus B. Grieve-Smith, James MacFarlane, Joshua Staley, & Gabriel Waters. 2008. Effects of language experience on the perception of American Sign Language. Cognition 109(1). 41–53.
Narayan, Chandan R., Janet F. Werker, & Patrice Speeter Beddor. 2010. The interaction between acoustic salience and language experience in developmental speech perception: Evidence from nasal place discrimination. Developmental Science 13(3). 407–420.
Norris, Dennis, James M. McQueen, Anne Cutler, & Sally Butterfield. 1997. The possible-word constraint in the segmentation of continuous speech. Cognitive Psychology 34. 191–243.
Nusbaum, Howard C. & Daniel Margoliash. 2009. Language processes. In Gary G. Berntson & John T. Cacioppo (eds.), Handbook of neuroscience for the behavioral sciences, 879–904. Hoboken, NJ: Wiley.
Orfanidou, Eleni, Robert Adam, Gary Morgan, & James M. McQueen. 2010. Recognition of signed and spoken language: Different sensory inputs, the same segmentation procedure. Journal of Memory and Language 62(3). 272–283.
Palmer, Stephanie Baker, Laurel Fais, Roberta Michnick Golinkoff, & Janet F. Werker. 2012. Perceptual narrowing of linguistic sign occurs in the 1st year of life: Perceptual narrowing of linguistic sign. Child Development 83(2). 543–553.
Sehyr, Zed Sevcikova & Kearsy Cormier. 2016. Perceptual categorization of handling handshapes in British Sign Language. Language and Cognition 8. 501–532.
Shepard, Roger N. 1972. Psychological representation of speech sounds. In Edward E. David & Peter B. Denes (eds.), Human communication: A unified view, 67–113. New York: McGraw-Hill.
Tyrone, Martha E. & Claude E. Mauk. 2010. Sign lowering and phonetic reduction in American Sign Language. Journal of Phonetics 38(2). 317–328.
Werker, Janet F. & Richard C. Tees. 1984. Cross-language speech perception: Evidence for perceptual reorganization during the first year of life. Infant Behavior and Development 7. 49–63.
Wienholz, Anne, Derya Nuhbalaoglu, Nivedita Mani, Annika Herrmann, Edgar Onea, & Markus Steinbach. 2018. Pointing to the right side? An ERP study on anaphora resolution in German Sign Language. PLoS ONE 13(9). e0204223.
Yang, Ruiduo & Sudeep Sarkar. 2006. Detecting coarticulation in sign language using conditional random fields. Proceedings of the 18th International Conference on Pattern Recognition (ICPR'06) 2, 108–112. Washington, DC: IEEE Computer Society.


3 LEXICAL PROCESSING IN COMPREHENSION AND PRODUCTION
Experimental perspectives

Eva Gutiérrez-Sigut & Cristina Baus

3.1 Introduction

The study of sign language has received increasing interest in recent decades. As an example, in 2000, only 30 published articles included 'sign language' in their title or topic; by 2015, this number had increased to 285 articles. Within this growing field, research on sign language processing, including both comprehension and production, has also received a remarkable boost in recent years. This is to a large extent due to technical developments and the extensive use of various neuroimaging techniques to study commonalities and differences between speech and sign processing. The comparison of the behavioral and neural correlates of speech and sign processing has allowed us to identify the components of a modality-independent language network. This finding, in turn, contributed, together with linguistic studies, to the acceptance of sign languages as natural languages and not mere pantomime. In the initial stages of research, efforts were concentrated on demonstrating universal aspects of language processing; thus, little attention was paid to the differences between modalities or to the specific aspects of the sign modality. However, the wide recognition of sign languages as natural languages has supported a greater interest in furthering our understanding of modality-specific factors (e.g., the use of proprioceptive and spatial information for phonological encoding or the greater potential for iconicity).

This chapter offers a comprehensive overview of the most relevant studies of sign language comprehension and production that focus on the lexical level of processing. Our aim is to determine how similarities and differences between sign and speech processing are reflected in behavioral data and at the brain level. With this purpose in mind, we review the contributions of a large pool of studies, including those employing more classic offline methodologies (e.g., the study of spontaneous errors) as well as recent studies using online tasks and neuroimaging techniques.

One major goal of studies of sign language processing has been to understand the psychological reality of signs and their formational parameters, as well as the biological basis of these linguistic constructs.

For many years, much attention has been paid to the identification of modality-independent mechanisms, taking advantage of the fact that sign languages provide unique insight into universal aspects of language processing. Researchers have investigated whether sign languages are sensitive to the same linguistic phenomena (e.g., lexicality, priming of formal features) and sustained by the same brain networks as the spoken modality. More recently, the study of modality-specific aspects of sign comprehension and production, such as iconicity and the tighter relationship between form and meaning, has received increasing attention. In this chapter, we review the main behavioral and neuroimaging findings that contribute to our understanding of the lexical processing of signs from the perspectives of sign comprehension and sign production. We start by considering whether neural changes due to sensory deprivation alone can account for the sign language processing patterns found in deaf signers. Next, we discuss how the processing of signs differs from other kinds of non-linguistic movement and how it is similar to spoken language processing. We then address several lexical effects in comprehension and production (e.g., lexicality, lexical frequency, and semantic priming effects), as well as how properties of sublexical units (phonological parameters and syllables) affect lexical access to signs. Finally, we consider how sign and speech processing are linked by examining studies that show cross-linguistic influences between languages from different modalities.

By including brain-lesion and neuroimaging findings as well as behavioral results, we aim to offer a richer picture of lexical access to signs. While behavioral measures provide only a late index of processing, neuroimaging techniques allow us to explore the brain substrates of sign comprehension and production; in this way, we are able to answer different but complementary questions. Electroencephalography (EEG) and magnetoencephalography (MEG) offer very precise temporal resolution and have proved to be a useful resource for studying how cognitive processes unfold over time and whether this unfolding is similar during word and sign processing. In contrast, positron emission tomography (PET), functional magnetic resonance imaging (fMRI), and, to a certain degree, functional near-infrared spectroscopy (fNIRS) offer very good spatial resolution for comparing the brain areas underlying sign and word processing.

Finally, the signing population is very diverse. The pursuit of reduced variability among participants has led many studies to focus on a very small group of participants: deaf native signers. In addition to the study of deaf native signers, researchers have contrasted sign processing in various groups of signers. This comparative approach has allowed us to explore the effects of a number of factors on lexical processing, depending on which characteristics are similar or different between groups. For instance, the comparison between native and late deaf signers has provided evidence of the impact of age of acquisition (AoA) on sign processing. Similarly, hearing signers (as a type of bimodal bilingual) have offered a unique window into the nature of language co-activation in bilinguals. Throughout this chapter, we report findings on sign processing in various types of signers as well as similarities and differences between groups.

3.2 Deafness, plasticity, and the language network

One initial question in sign language processing is whether the findings (or which findings) are mainly a result of the sensory deprivation caused by deafness or, rather, are attributable to the linguistic aspects of signs. To address this question, researchers have studied the effects of cross-modal plasticity on language processing. Cross-modal plasticity refers to the cortical reorganization of deprived brain areas (auditory areas, in the case of deafness) for processing another sensory modality, e.g., the visual modality.

In an MEG study, Leonard et al. (2012) compared brain activation in hearing non-signers and deaf signers triggered by spoken words and signs, respectively. Results showed modality-specific early sensory processing (100 ms after stimulus onset), with the auditory cortex involved in speech and visual brain areas involved in sign. However, later lexico-semantic processing (300 to 500 ms after stimulus onset) led to activation of left-lateralized fronto-temporal areas (the language network) in both modalities, suggesting the existence of a modality-independent language network. Similarly, Cardin et al. (2013) showed that plastic changes in left temporal areas (e.g., the left superior temporal cortex, left STC) have a linguistic origin, while changes in the right counterpart (right STC) are tied to both linguistic processing and sensory deprivation. More recently, Cardin et al. (2016) furthered this finding by showing that, during a low-level phoneme-monitoring task, signers and non-signers engage different sensorimotor neural networks for different phonological parameters of signs; signers, however, perform linguistic processing of both handshape and location in the same brain areas. Taken together, these results indicate that the involvement of the language network during sign processing is not merely due to auditory deprivation and cross-modal rewiring, but rather to sign language experience (see also Neville et al. 1997 and Newman et al. 2002).

It is worth noting that lesion and neuroimaging studies have shown that sign language comprehension and production appear to rely on a left-lateralized network that is strikingly similar to the language network involved in spoken language processing (Damasio et al. 1986; Bellugi et al. 1989; Corina 1999; Corina et al. 1999; MacSweeney, Capek et al. 2008; MacSweeney, Waters, et al. 2008; Newman et al. 2015). However, although the neural networks supporting speech and sign processing are very similar, there are specific requirements of the modality that should be taken into account (see, e.g., Neville et al. 1997 and Neville et al. 1998 for a similar conclusion in an event-related potential (ERP) experiment). In addition to analogous activation in the classical left perisylvian areas for sign and speech, previous neuroimaging studies have identified the left parietal lobe as having a greater role in signed than in spoken language processing (for a review, see MacSweeney, Capek et al. 2008; MacSweeney, Waters, et al. 2008; Corina et al. 2012; Emmorey et al. 2014). In the following section, we first compare the effects found for sign language processing to those found for other types of body movements, and then detail modality-independent and some modality-specific aspects of sign processing.

3.3 Sign processing

3.3.1 Signs vs. body movements and gestures

While the motoric and articulatory demands of sign and speech are dramatically different, signing shares articulators with other human movements, including different types of gestures (e.g., meaningless gestures such as scratching one's nose, co-speech gestures, emblems, complex pantomimes, etc.). Like many body movements, lexical signs consist of precise movements of the face, torso, arms, and hands. However, unlike other types of motoric actions, signs make use of a structured set of handshapes, locations, and movements, as well as hand orientations and non-manual features. This raises the question of whether the effects observed are linguistic in nature or somehow influenced by the processing of non-linguistic movements.

To tackle this question, Gutiérrez-Sigut, Daws et al. (2015) measured blood flow velocities to both brain hemispheres in hearing native signers during both British Sign Language (BSL) and English production. Lateralization strength was compared to that of hearing non-signers repeating sign-like (non-linguistic) movements. Results showed a stronger left lateralization for BSL than for English production that could not be accounted for exclusively by the constant movement of the right hand inherent to signing. This finding was replicated with deaf native signers (Gutiérrez-Sigut, Payne et al. 2015), who also showed stronger laterality indices for sign than for speech. This stronger lateralization was found in both covert and overt tasks and was hence not due to motor movement alone. The result replicates previous findings that linguistic processing of signs is performed in a left-lateralized network, and it indicates that the increased left lateralization may be driven by specific properties of sign production, such as the increased use of self-monitoring mechanisms or the nature of the phonological encoding of signs.

The fact that signing is different from gesture and pantomime was documented early on by sign language researchers (see Klima & Bellugi 1979). For instance, lesion studies described cases of aphasia in deaf people with preserved comprehension and production of meaningful gestures after a left frontotemporoparietal lesion (Corina et al. 1992; Marshall et al. 2004). More recent research has furthered our knowledge of how the processing of signs differs from gesture processing (see Goldin-Meadow & Brentari 2015 for a review). Comprehension of lexical signs and gestures may share some of the earliest perceptual processing mechanisms. In a series of behavioral studies, Corina and colleagues (Corina et al. 2011; Corina & Grosvald 2012) investigated the early stages of sign and human action processing. Reaction times were measured while deaf signers and hearing non-signers decided whether the action displayed on the screen was a lexical sign or a gesture. Results showed that signers were better than non-signers at discriminating signs and actions. However, perceptual variations in the stimuli affected both groups similarly, suggesting that the same general perceptual systems are used to recognize signs and other human actions.

Later stages of processing, in contrast, seem to engage different mechanisms. Grosvald et al. (2012) examined ERP responses to sentence endings that could be non-linguistic gestures (e.g., scratching one's face), signs, or pronounceable non-signs. Results showed that while linguistic endings modulated the amplitude of the N400 component (associated with semantic processing; see more details below), gestures elicited a longer-lasting positivity that started in the same time window. Although more research is needed, electrophysiological evidence suggests the existence of an early filtering mechanism based on semantic properties (see Grosvald et al. 2012) that allows us to discriminate linguistic items from gestures. ERP findings are also consistent with other neuroimaging studies that have found differences in brain activation for signs and gestures in deaf signers (MacSweeney et al. 2004; Corina et al. 2007; Emmorey et al. 2010; Newman et al. 2015). MacSweeney et al. (2004) found greater activation in the left posterior superior temporal sulcus and gyrus, extending into the supramarginal gyrus, for sign language than for Tic Tac (a manual-brachial signaling code without phonological structure). This difference in activation was found in signers, but not in non-signers.
In a recent study, Newman et al. (2015) found that the classic left-hemisphere language areas were activated more strongly for American Sign Language (ASL) than for meaningful gestures in native signers. Crucially, this activation was different from that of non-signers, who mostly engaged brain regions involved in human action perception for both ASL and gesture.

To sum up, it seems clear that linguistic processing of signs, at both the behavioral and the brain level, is different from the processing of other human movements with varying degrees of meaning. Nonetheless, it is also worth noting that the line between body movement and sign processing is not clear-cut. For instance, there seem to be differences between the way in which signers and speakers gesture (see, e.g., Emmorey 2002). Additionally, sign language experience might facilitate a 'language-like analysis of gestures' (Newman et al. 2015).
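The lateralization comparisons discussed earlier in this subsection (e.g., Gutiérrez-Sigut, Daws et al. 2015) are typically summarized as a laterality index contrasting task-related activity in the two hemispheres. The sketch below illustrates one common formulation, LI = (L − R) / (L + R); the numbers are invented, and the formula is a generic illustration rather than the exact procedure used in the cited studies.

```python
# Generic laterality index: +1 = fully left-lateralized, -1 = fully right-lateralized.
# The activation values below are invented example numbers for illustration only.

def laterality_index(left: float, right: float) -> float:
    """(L - R) / (L + R); assumes both values are positive task-related measures."""
    return (left - right) / (left + right)

# Hypothetical task-related signal change per hemisphere (left, right)
conditions = {
    "sign production":         (4.2, 2.1),
    "speech production":       (3.6, 2.4),
    "non-linguistic movement": (2.0, 1.8),
}

for name, (left, right) in conditions.items():
    print(f"{name:>24}: LI = {laterality_index(left, right):+.2f}")
```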

3.3.2 A few notes about lexical access in comprehension and production

Lexical access refers to the process by which a word or sign representation is accessed from the mental lexicon, either to comprehend or to produce it. The main difference between word comprehension and production is the direction in which information flows through the different levels of processing. Broadly speaking, word recognition goes from the word form to phonological and semantic processing, while word production goes in the opposite direction, from the concept to the utterance. Accordingly, phonological and semantic effects take place at different times. Note that lexical access is assumed to be a competitive process that is comparable in comprehension and production. However, differential effects often reflect the distinction between processing streams. For instance, while accessing the word cat, the activation of a similar concept (e.g., dog) would result in facilitation in the case of word recognition but in interference in a production task. These differential effects should be taken into account when comparing comprehension and production results.

3.3.3 Lexicality, lexical frequency, and semantic effects in sign comprehension

Recognition of signs is faster than recognition of spoken words. Gating studies, in which participants see increasingly longer segments of a sign until they can identify it correctly, have shown that signs are accurately recognized after seeing only 35% of the sign (Emmorey & Corina 1990). Spoken words, however, need approximately 83% of the word to be presented before participants can accurately recognize them (Grosjean 1980). It has been proposed that sublexical features specific to signs, such as the large amount of form-based information available from the beginning of the sign and the highly constrained phonotactic structure, contribute to this faster identification of signs. Despite these faster recognition times, the processing of signs shows remarkable similarities with speech processing when well-established lexical effects, such as lexicality, lexical frequency, and semantic priming, are considered. However, studies have also shown subtle differences in processing that might correspond to modality-specific features (Gutiérrez et al. 2012a; Hosemann et al. 2013; Marshall et al. 2014).

The lexicality effect is often measured using a lexical decision task in which participants are instructed to decide, as quickly and accurately as possible, whether the item displayed on the screen is a legitimate sign. Non-signs are constructed by varying one or more phonological parameters of a lexical sign (see van der Hulst & van der Kooij, Chapter 1). Depending on the nature of the study, the resulting non-sign could be a pronounceable sign form (i.e., a pseudo-sign) or an unpronounceable form; in either case, non-signs lack a representation in the signer's lexicon. Signs, like words, are recognized faster and more accurately than non-signs, the so-called 'lexicality effect' (cf. Emmorey 1991; Emmorey & Corina 1993; Carreiras et al. 2008; Gutiérrez & Carreiras 2009; Dye et al. 2016).

Gutiérrez & Carreiras (2009) found faster and more accurate responses to signs than to non-signs during lexical decision and visual similarity judgment tasks (the latter does not require linguistic processing). These findings indicate that participants engage in automatic linguistic processing of signs. Emmorey (1991) found that repetition priming facilitates the recognition of signs but not of non-signs, and Emmorey (1995) showed that non-signs that are more similar to real signs take longer to reject. Electrophysiological responses to pronounceable non-signs also parallel those typically elicited by non-words: specifically, an increased N400 (see details below) at centro-parietal electrode sites has been found for pronounceable non-signs compared to signs in studies using single signs (Gutiérrez & Carreiras 2009) and sentences (Grosvald et al. 2012).

The 'frequency effect' refers to the finding that more frequent words are recognized faster and more accurately than less frequent ones. In spoken languages, frequency counts are based on corpus studies (e.g., Kučera & Francis 1967). More recently, technological developments have made possible a flourishing of sign databases (Fenlon et al. 2014; Caselli et al. 2016; Gutiérrez-Sigut et al. 2016; see Johnston 2012 for a review), which will facilitate the operationalization of relevant psycholinguistic parameters in the near future. However, this work is still taking its initial steps, and normative frequency counts are not yet available for sign languages. Studies so far have used a subjective measure of familiarity, based on deaf signers' ratings of how familiar a sign is. Lexical familiarity is strongly correlated with lexical frequency in spoken languages (e.g., Gernsbacher 1984; Balota et al. 2001). Sign language studies have found robust familiarity effects; that is, familiar signs are recognized faster than less familiar signs (Emmorey & Corina 1993; Mayberry & Witcher 2005; Carreiras et al. 2008; Ferjan Ramírez et al. 2016).

Another well-established finding in spoken language is the 'semantic priming effect' (Meyer & Schvaneveldt 1971). In a typical priming experiment, participants are asked to perform a task on a target item that is preceded by several types of primes. The display time of the primes and the interval between the presentation of prime and target depend on the stage of processing under study (see Carreiras 2010 for a discussion of the possible influence of different stimulus onset asynchronies (SOAs) on sign processing). Typically, responses in a condition in which prime and target are unrelated are compared to an experimental condition in which prime and target hold some kind of relationship, in this case semantic. Differences in the processing of one particular target across conditions are attributed to the influence of the prime. In the case of semantic priming, activation of the prime is thought to result in the spread of activation to semantically related concepts; hence, when a semantically related target is displayed, it is more accessible than an unrelated target. There is abundant evidence for facilitatory semantic priming in speech processing (see, e.g., Meyer & Schvaneveldt 1971; Neely 1977; Becker 1980; de Groot 1983). Behavioral studies of sign processing have shown that signs (e.g., DOG) are recognized faster when they are preceded by a semantically related sign (e.g., CAT) than by an unrelated prime (e.g., PEN; see Emmorey & Corina 1993; Mayberry & Witcher 2005; Gutiérrez & Carreiras 2009; Bosworth & Emmorey 2010).
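The behavioral effects just reviewed are usually quantified as simple differences between condition means in reaction time (and accuracy). The toy sketch below shows that arithmetic for a lexicality effect and a semantic priming effect; the reaction-time values are invented and are not taken from any of the studies cited.

```python
# Toy reaction times (ms) from a lexical decision task with semantic priming;
# all values are invented for illustration only.
from statistics import mean

rts = {
    ("sign", "related prime"):       [620, 605, 640, 598, 612],
    ("sign", "unrelated prime"):     [668, 655, 690, 660, 673],
    ("non-sign", "unrelated prime"): [735, 760, 742, 728, 751],
}

mean_related = mean(rts[("sign", "related prime")])
mean_unrelated = mean(rts[("sign", "unrelated prime")])
mean_nonsign = mean(rts[("non-sign", "unrelated prime")])

# Semantic priming effect: faster responses after a semantically related prime
priming_effect = mean_unrelated - mean_related

# Lexicality effect: signs (averaged over prime types) are accepted faster
# than non-signs are rejected
mean_sign = mean(rts[("sign", "related prime")] + rts[("sign", "unrelated prime")])
lexicality_effect = mean_nonsign - mean_sign

print(f"priming effect:    {priming_effect:.0f} ms")
print(f"lexicality effect: {lexicality_effect:.0f} ms")
```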
Electrophysiological studies provide complementary evidence about the time course of semantic processing of signs. Most studies have focused on the N400 component, a negative shift in the ERP waveform that peaks approximately 400 ms after stimulus onset and is larger over centro-parietal electrode sites. It has been widely demonstrated that the N400 is sensitive to semantic relatedness, both within prime-target pairs and in sentence contexts (Kutas & Hillyard 1980; Holcomb & Neville 1990; Van Petten & Kutas 1990; Van Petten & Kutas 1991; Bentin et al. 1993).

A larger N400 amplitude for semantically incongruent than for congruent sentence endings has been observed for several sign languages (ASL: Neville et al. 1997; Capek et al. 2009; Grosvald et al. 2012; Spanish Sign Language (LSE): Gutiérrez et al. 2012a; German Sign Language (DGS): Hosemann et al. 2013; Hänel-Faulhaber et al. 2014). Taken together, these findings indicate that semantic processing is essentially modality-independent. The results are consistent with other neuroimaging studies that highlight the similarities in the neural networks used for speech and sign processing (e.g., MacSweeney, Capek et al. 2008; MacSweeney, Waters, et al. 2008). Still, there are subtle differences that suggest some representational differences in the composition of the sign lexicon. For example, several studies have found that the N400 to semantically incongruent signs has a later onset and peak than what is usually found for spoken languages (Neville et al. 1997; Capek et al. 2009; Grosvald et al. 2012). Some properties of signs, such as their more diverse recognition points and longer transition times, might contribute to explaining these differences (Neville et al. 1997; Capek et al. 2009).

A few studies have specifically investigated how the interplay between semantic and phonological properties affects the time course of sign recognition. Hosemann et al. (2013) found that handshape information accessible in the transition phase affects the onset of the N400 for semantically incongruent signs. Moreover, the interplay between semantic information and the sign's handshape further influences the timing of this effect: when the handshape of the critical sign classifies the subject according to semantic characteristics (e.g., action verbs such as JUMP or STAND), the effect on the waveform appears earlier than when the handshape does not carry semantic information. Gutiérrez et al. (2012b) compared ERP responses to ASL sentences with a plausible ending (baseline) to four types of violations created by crossing factors of semantic and phonological (location) relatedness. The resulting critical signs were implausible (hence considered semantic violations) and could be semantically related to the baseline sign and share its location (+s, +p), semantically related but with a different location (+s, −p), or sharing the location but semantically unrelated (−s, +p); finally, the critical sign could be both semantically and phonologically unrelated (−s, −p). Results showed that all violations elicited an increased N400 in the 450–600 ms post-stimulus-onset time window, reflecting the difficulty of semantic integration with the sentence context. Interestingly, those conditions that shared only location or only semantic properties with the baseline sign (+s, −p and −s, +p) elicited an earlier effect (starting at 350 ms after stimulus onset). The existence of differential effects in these two time windows offers support for two different stages of sign processing, with lexical selection effects operating around 350 ms and semantic integration in the neighborhood of 450 ms. Crucially, semantic and phonological features may interact at the level of lexical sign selection in ways that differ from those observed in spoken languages.
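The N400 analyses described above typically reduce the continuous EEG to a mean amplitude per condition within a fixed post-stimulus window (e.g., 300–500 ms, or 450–600 ms for the later effects reported for signs) over centro-parietal electrodes, and then compare conditions. A minimal numpy sketch of that windowing step is shown below; the array shapes, sampling rate, and channel labels are assumptions for illustration, not details of the studies cited, and the random data stand in for real recordings.

```python
import numpy as np

# Assumed toy data: epochs x channels x samples, 500 Hz sampling,
# epochs running from -200 ms to +798 ms relative to sign onset.
rng = np.random.default_rng(0)
sfreq = 500
n_samples = 500
times = np.arange(n_samples) / sfreq - 0.2          # seconds relative to onset
channels = ["Cz", "CPz", "Pz", "CP1", "CP2"]          # centro-parietal subset (assumed)
epochs_congruent = rng.normal(0, 1, (40, len(channels), n_samples))
epochs_incongruent = rng.normal(0, 1, (40, len(channels), n_samples))

def mean_amplitude(epochs, tmin, tmax):
    """Mean voltage in the [tmin, tmax] window, averaged over channels and
    time points, returning one value per trial."""
    mask = (times >= tmin) & (times <= tmax)
    return epochs[:, :, mask].mean(axis=(1, 2))

# Window in which the (later) N400 effects for signs have been reported
n400_congruent = mean_amplitude(epochs_congruent, 0.45, 0.60)
n400_incongruent = mean_amplitude(epochs_incongruent, 0.45, 0.60)

# The condition difference (incongruent minus congruent) estimates the N400 effect;
# with real data it would be negative-going and tested statistically across participants.
print("N400 effect estimate:", (n400_incongruent - n400_congruent).mean())
```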

3.3.4 Sign production

It is commonly accepted that, from the intention to speak/sign to the final articulation, information flows through different and separate levels of processing (i.e., conceptual, lexical, and phonological levels; see, e.g., Dell & O'Seaghdha 1991, Caramazza 1997, and Levelt et al. 1999). The finding of similar linguistic and psycholinguistic phenomena as well as similar neuroanatomical substrates in both modalities supports the claim that language production follows the same stages in the spoken and signed modalities. One example of such similarities is provided by the so-called 'tip of the fingers' and 'tip of the tongue' effects (ToFs: Thompson et al. 2005; ToTs: Caramazza 1997). Both refer to the momentary inability to retrieve a known lexical item (Brown 1991).

During a ToF state, analogously to a ToT, ASL signers had access to semantic information but only partial access to the form of the intended sign (Thompson et al. 2005). This result supports a separation between semantic and phonological form during sign production. Another piece of evidence in favor of the independence of levels of processing comes from the existence of 'slips of the tongue' (Fromkin 1971) and 'slips of the hand', both referring to the spontaneous errors occurring during language production. Importantly, errors are non-random and respect the phonotactic constraints and grammatical rules of the language. For speech, vowels interact with vowels and consonants with consonants (e.g., 'It's a real mystery' becomes 'It's a meal mystery'), and nouns generally interact with nouns and verbs with verbs. In the signed modality, sign error corpora (ASL: Newkirk et al. 1980; DGS: Hohenberger et al. 2002) revealed the same patterns of errors, with the majority of errors involving the exchange of one phonological parameter (especially handshape), and only 1% of errors involving an exchange of the entire sign. The existence of such errors supports the notion that phonology is an independent level of processing, and that phonological parameters constitute the fundamental units composing a sign (see Hohenberger & Leuninger 2012 for a review of sign language production).

Using fluency tasks in BSL, Marshall et al. (2014) studied deaf native signers' responses to semantic and phonological categories. In a fluency task, participants are asked to generate as many items as possible from a semantic category (e.g., animals) or a phonological category (e.g., letter A, I-handshape) within a given amount of time (usually one minute). Marshall et al. found a larger number of responses for semantic than for phonological tasks, a decline in response rate over time, and rich phonological and semantic clustering. These results parallel findings for spoken languages, suggesting a similar organization of the lexicon in spoken and signed languages. However, the results also showed more frequent semantic clusters for handshape and location categories than what is usually found for spoken phonology. This finding indicates that lexical retrieval of signs is modulated by strong links between semantics and phonology. Taken together, these results offer partial support for proposals such as the 'semantic phonology' theory (Stokoe 1991), which holds that the phonological form of a sign is primarily derived from aspects of its meaning (also see Section 3.3.5), without discarding the idiosyncrasies of sign language processing.

The picture-sign interference paradigm is one of the most extensively used paradigms to explore language production processes. Studies using this paradigm have provided evidence that concepts are retrieved earlier than their corresponding signs, which, in turn, are accessed before the corresponding phonological units, pointing to a process equivalent to that proposed for spoken languages (Levelt et al. 1999). In this paradigm, a picture is presented (simultaneously or sequentially) with a distractor sign that can be semantically or phonologically related to the picture's corresponding name. Both semantic interference and phonological facilitation effects have been reported in the signed modality (e.g., Baus et al. 2008; see also Navarrete et al. 2015 for semantic effects in the cumulative semantic paradigm in both modalities).
Corina & Knapp (2006) were the first to provide evidence of a different time course for semantic and phonological effects during signing. As is often found in spoken languages, phonological effects were present when the distractor was presented in synchrony with or after the picture (SOAs of 0 and +). However, the authors found semantic effects only when semantically related words/signs preceded the picture. In a later study, in addition to replicating Corina & Knapp's (2006) results, Baus et al. (2008) found semantic effects at SOA 0.

More recent studies have started to explore whether other psycholinguistic variables known to have an effect on spoken language production, such as lexical frequency or neighborhood density, also influence sign production. It is worth reiterating here that, as there is a lack of databases for psycholinguistic variables of signs, frequency values for these studies are taken either from a subjective measure of familiarity or from spoken language databases. We acknowledge that this approach is not ideal. Indeed, Johnston (2012) recently questioned the use of subjective familiarity as a measure of sign frequency: of the 83 signs explored in Australian Sign Language (Auslan), only 12% of the high-familiarity signs appeared among the 300 signs of highest frequency in the Auslan Corpus. Consequently, an effort should be made to improve existing databases and create new ones. However, even with such a crude estimate of lexical frequency, studies have found results consistent with the spoken language literature rather than null results. That is, highly familiar (frequent) signs are produced faster and more accurately than less familiar signs (Emmorey et al. 2012, 2013; Baus & Costa 2015). Electrophysiological data have also shown familiarity/frequency effects for signs similar to those typically found for words. Specifically, Baus & Costa (2015) showed for the first time that low-familiarity signs elicited a larger positivity than high-familiarity signs in the P200 time window. The P200 component is an early positivity indexing the ease with which lexical retrieval occurs, with items that are more difficult to retrieve showing larger positivities than easier items. Their results largely replicated findings for the spoken modality, although the onset of the effect was later for signs than has previously been described for speech (e.g., Strijkers et al. 2010).

Finally, neuroimaging techniques have allowed us to identify the neural circuitry underlying sign production, showing similarities and some differences between modalities. Emmorey et al. (2007) used PET to measure brain activation for object naming in deaf native ASL signers and hearing English speakers. Strikingly, similar activation was observed in classical language areas for both modalities. Furthermore, there was greater activation in the left supramarginal gyrus and the superior parietal lobe for sign than for speech production. These differences were linked to phonological encoding and proprioceptive monitoring. In an H₂¹⁵O-PET study, Braun et al. (2001) studied the spontaneous generation of narratives in ASL and spoken English, produced by hearing native signers. They found greater activation in left superior parietal and paracentral areas for ASL, which was associated with the integration of somatosensory information. It is worth noting that both of these studies attempted to control for purely motoric effects by subtracting a baseline that required vocal or manual movements equivalent to those of speech and sign production. Although this approach is useful for investigating high-level linguistic processes, it does not allow for the study of lower-level, modality-dependent features of language production (for a discussion, see Emmorey et al. (2014)). Emmorey et al. (2014) used H₂¹⁵O-PET data to directly contrast ASL and spoken English production during picture naming in hearing native signers, without removing low-level motoric effects. Sign production led to greater parietal activation than English production, especially in the left hemisphere. This greater activation was attributed to somatosensory and proprioceptive feedback (anterior parietal activation), the voluntary production of motor movements (posterior parietal activation), and the sensorimotor integration necessary for phonological encoding of signs (inferior parietal activation).

In sum, although still scarce, studies of sign production have found that the levels of sign processing are separate and follow different time courses. In addition, lexical access is sensitive to the lexical properties of signs, such as frequency and neighborhood density.

Neuropsychological studies have observed activation of the classical left-lateralized language network. Interestingly, additional parietal areas seem to be sensitive to the particular characteristics of the signed modality (see, e.g., Corina et al. (2003) and Emmorey et al. (2014) for discussion of the specific proprioceptive feedback and phonological encoding requirements of signs). In Section 3.4, we will describe studies exploring the influence of sublexical variables (such as phonological parameters) on the processing of lexical signs, but first we will pay close attention to studies on iconicity.

3.3.5 Iconicity: the link between meaning and form

The relationship between the linguistic form of a word (e.g., its speech sounds) and its meaning, whether arbitrary or iconic, has intrigued researchers for decades. Spoken language is primarily conceived as a system of arbitrary symbols. A priori, there is nothing in the sequence of sounds composing a given word (for example, [kæt]) that conveys information about the meaning we associate with it: a 'four-legged feline mammal'. It is worth noting, however, that spoken languages include a small repertoire of non-arbitrary form-meaning mappings that map directly onto our sensory, motor, or emotional experiences of the world, namely onomatopoeias (words that sound like their referents) or sound-symbolic mappings (certain sounds are able to capture information about the form of an object; cf. the kiki/bouba effect (Ramachandran & Hubbard 2001)). In sign languages, the presence of non-arbitrariness or iconicity is much more evident (e.g., Mandel 1977; Taub 2001; Pietrandrea 2002). The visual-gestural modality, apart from regular arbitrary signs, favors a high degree of transparency between a form and its referent (see, e.g., Klima & Bellugi (1979); and see Armstrong et al. (1995) and Cuxac (2000) for a view that considers signs as essentially iconic). Therefore, there is space for a more direct mapping between the visual properties of our experiences and the phonology of signs. Consider, for instance, the sign SCISSORS in Catalan Sign Language (LSC): the handshape and movement of the index and middle fingers clearly resemble the action of cutting with scissors. This higher degree of iconicity found in signs offers a unique opportunity to explore the link between the form of lexical items and their conceptual representation. Indeed, within sign language research, iconicity has generated substantial work aimed at determining the linguistic relevance of this property.

It is widely assumed that iconicity aids lexical retrieval during sign language learning. Campbell et al. (1992) showed that, after a brief exposure to signs, non-signers were able to identify and recall iconic signs more accurately than non-iconic signs (see also Lieberth & Gamble 1991; Baus et al. 2012; Ortega & Morgan 2015). Indeed, Thompson et al. (2012) recently suggested that iconicity plays a fundamental role in language development, both in signed and spoken languages. However, beyond language learning, results describing the impact of iconicity on sign language processing have been mixed (see Perniss et al. (2010) for a review). For sign comprehension, there is some evidence in favor of iconicity strengthening the links between meaning and form, aiding semantic processing but having the opposite effect when phonological processing is required (Grote & Linz 2003; Ormel et al. 2009; Thompson et al. 2009, 2010; but see Bosworth & Emmorey (2010)). Using a picture-sign matching task, Thompson et al. (2009) showed that deaf signers were faster when the presented picture highlighted the same iconic property as the sign (e.g., a picture of a bird highlighting the beak, which is also the property selected in the sign BIRD) rather than another property (the wings; see also Grote & Linz 2003; Ormel et al. 2009; Vinson et al. 2015). However, the existence of an automatic activation of semantic features for iconic signs has been challenged by Bosworth & Emmorey (2010), who used a semantic priming task in which iconic target signs (PIANO) could be preceded by a prime that was only semantically related (MUSIC), a prime that was both iconic and semantically related (GUITAR), or an unrelated prime. Although there was a semantic priming effect, it was not modulated by iconicity. That is, MUSIC and GUITAR had the same effect on the target sign PIANO. Ortega & Morgan (2015) also failed to find an effect of iconicity in a cross-modal priming study. Using a phonological task ('decide whether a sign has one specific handshape'), Thompson et al. (2010) reported slower responses for iconic than for non-iconic signs. Baus et al. (2014) observed a similar detrimental effect for bimodal bilinguals, who were slower at translating iconic than non-iconic signs into English. Bosworth & Emmorey (2010) argued that the influence of iconicity on sign processing might be strongly affected by other variables related to the items (e.g., number of translations, neighborhood density, AoA) or by the task used.

The results from sign production, scarcer than those from comprehension, are also mixed regarding the role of iconicity. While iconicity does not seem to affect the rate of ToFs or of anomic states in aphasic patients (Thompson et al. 2005; Marshall et al. 2014), it increases the speed with which signs are retrieved from the lexicon (Baus & Costa 2015; Vinson et al. 2015). In a picture-naming task, deaf signers (Vinson et al. 2015), as well as bimodal bilinguals (Baus & Costa 2015), were faster at signing pictures whose corresponding signs were iconic than those whose signs were non-iconic (see also Navarrete et al. 2017; Pretato et al. 2018). Additionally, in both cases, iconicity had a greater impact on signs that were more difficult to retrieve from the lexicon (namely, low-frequency and later-acquired signs), supporting an interaction between iconicity and other lexical properties of signs, as suggested by Bosworth & Emmorey (2010). Interestingly, together with AoA effects, lexical frequency is often considered a relevant factor for phonological retrieval (e.g., Kittredge et al. 2008). Therefore, this interaction between lexical frequency and iconicity lends support to the existence of a link between meaning and form for iconic signs. In addition to the behavioral results, Baus & Costa (2015) were the first to report data on the time course of iconicity effects in sign production. Importantly, ERP modulations associated with iconicity were present very early in the course of signing: around 100 ms (N100), iconic signs elicited a larger negativity than non-iconic signs, indicating an early engagement of the conceptual system for iconic signs. Iconicity also modulated ERPs at a later time window (400–600 ms; P300), in which phonological retrieval is hypothesized to take place (see Indefrey & Levelt (2004) for a meta-analysis on the chronometry of word production).

In sum, studies on iconicity in sign comprehension and production show that iconicity plays a modest role in sign processing. It is more apparent when lexical retrieval is more difficult (e.g., when non-signers are learning a sign language) and interacts with other psycholinguistic variables (e.g., lexical frequency, AoA). Future studies should be aware of these issues and seek information in lexical sign databases (e.g., Caselli et al. 2016; Gutiérrez-Sigut et al. 2016).

3.4 Processing of lexical signs: sublexical units

3.4.1 Comprehension

It is generally accepted that, similarly to lexical access in spoken languages, signs are decomposed into smaller units which play a role in lexical access (although see Corina & Grosvald (2012) for a discussion of signers relying on the configuration of the articulators instead of decomposing signs into parts). Lexical access to signs is considered a two-stage process comprising sublexical and lexical levels (Caselli & Cohen-Goldberg 2014). Caselli & Cohen-Goldberg (2014) propose that, once factors specific to sign languages are accounted for, a single mechanism can be responsible for lexical access in both signed and spoken languages. Specifically, their computational model takes into account the fact that the various parameters seem to have distinct activation timings (locations seem to be activated earlier than handshapes) and that their representations might have different strengths. That is, the representation of location seems to be more robust than that of handshape, due to higher sublexical frequency, perceptual saliency, or amount of attention received. A range of behavioral and neuroimaging studies provides evidence that people use the sublexical structure of signs to organize their lexicon and achieve lexical access. Furthermore, studies have shown that the major phonological parameters (handshape, movement, and location) have a differential status. Although behavioral results are somewhat mixed regarding the direction of the effects, findings often support the idea that location has a prominent status in the early stages of sign processing.

Studies investigating signers' working memory have found that participants remember signed stimuli according to their phonological structure. Klima & Bellugi (1979) found that serial recall was poorer when the signs within a list shared phonological parameters than when they were unrelated. Likewise, signers seem to perform internal monitoring of phonological parameters during signing (Wilson & Emmorey 1997, 1998). Categorical perception effects for handshape, but not for location (Emmorey et al. 2003; Baker et al. 2005), also suggest that signers use handshape as a sublexical unit during sign recognition. Gating studies have shown that location, especially for signs in neutral space, is recognized first, followed by handshape and then movement (Grosjean 1980; Emmorey & Corina 1990). Orfanidou et al. (2009) found that location and handshape play different roles during BSL sign recognition. The authors studied misperceptions during a sign-spotting task in which participants spotted signs embedded in nonsense contexts. The smallest number of errors occurred with respect to location, followed by handshape.

Using the priming paradigm, Corina & Emmorey (1993) investigated lexical access in ASL during a lexical decision task. Results showed inhibitory priming when targets and primes shared location, and facilitation effects for movement overlap. No effects were found when pairs shared handshape. In a later priming study, Corina & Hildebrandt (2002) used a longer interval between the presentation of the prime and the target and did not find significant differences, thus not replicating the previous findings. The use of a shorter interval, similar to the one used in the original study, led to a non-significant inhibitory trend for both location and movement. These results replicate the original effect for location but not for movement. Carreiras et al. (2008) found that location and handshape affected recognition of LSE signs in different ways. In a lexical decision task on single signs, responses to less familiar signs were faster when they contained frequent handshapes than when they contained infrequent ones.
On the other hand, the pattern of results for location was the opposite: inhibition was found for signs containing frequent locations. In addition, results from a form-priming experiment showed that the facilitatory priming effect of handshape affects non-signs, while the inhibitory effect of location affects only signs (Carreiras et al. 2008).

A few behavioral studies have assessed the overlapping effects of multiple parameters on lexical access. Corina & Hildebrandt (2002) used a similarity judgment task in which participants were asked to decide which of four non-signs was most similar to a given target sign. They found that non-signs sharing location and movement with the target were judged as more similar than those sharing location and handshape or handshape and movement. Similarly, Dye & Shih (2006) reported data from native users of BSL showing facilitatory phonological priming for signs that share both location and movement with a previously presented sign. No effects were found in this study for other combinations of parameters or for single-parameter overlap. In a similar priming study in LSE, Gutiérrez & Carreiras (2009) found no effects for location and movement overlap, but facilitation for movement and handshape overlap and inhibition for location and handshape overlap. These findings suggest that phonological overlap of more than one parameter may influence processing in different ways (see Meade et al. (2017) for a recent discussion). However, further research is needed to understand the complex interactions that are at play.

Finally, Mayberry & Witcher (2005) compared reaction times to target signs preceded by unrelated signs with those to targets preceded by phonological minimal pairs, without explicitly studying the different combinations of parameter overlap. Results showed differences in the use of phonological information during lexical access depending on the age at which sign language was acquired (AoA). Specifically, early signers showed a facilitatory effect of phonological overlap, while late signers showed the opposite pattern; there was no effect for native signers. These findings suggest more automatic processing of signs for early learners than for late learners, whose lexical decisions might be based on more strategic processes. Gutiérrez et al. (2012a) used ERPs to investigate the influence and time course of handshape and location overlap during a lexical decision task in native and late signers. The findings indicate that the two parameters play different roles in the process of lexical access. While location overlap resulted in a higher-amplitude N400 (300–500 ms window) for both groups of signers, handshape overlap showed a later effect (600–800 ms) that was restricted to non-sign targets and native signers. These results suggest that location-driven lexical competition occurs early on in sign recognition, whereas the effects of handshape occur later and are weaker. Differences in the strength of brain responses to the two parameters suggest that handshape is influenced more by factors such as AoA of the sign language, possibly reflecting less efficient processing by late signers. Interestingly, Cardin et al. (2016) found differences in the processing of handshape and location at the perceptual but not the linguistic level: in an fMRI phoneme-monitoring study, handshape and location engaged the same linguistic network but distinct perceptual ones.

Lesion studies of signers have shown severe comprehension difficulties as well as paraphasic signing (signing with phonological errors) after lesions involving Wernicke's area and the left supramarginal gyrus (SMG; see, e.g., Chiarello et al. 1982; Poizner et al. 1987; Corina et al. 1992). This suggests that sign language comprehension may be more dependent on left-hemisphere parietal areas, such as the SMG, than speech comprehension is. In an fMRI experiment investigating rhyme judgment, MacSweeney, Capek, et al. (2008) and MacSweeney, Waters, et al. (2008) reported activation of a similar left-lateralized neural network for both English and BSL.
Participants were presented with two pictures and decided whether the corresponding signs shared location or whether the English words rhymed. Brain activation for the two languages was strikingly similar, but not identical: the left parietal lobe was more strongly activated for the BSL than for the English task. A growing number of studies have identified the left parietal lobe as having a greater role in signed than in spoken language processing (for a review, see Corina & Knapp (2006); MacSweeney, Capek, et al. (2008); MacSweeney, Waters, et al. (2008); Corina et al. (2013)). This suggests that phonological encoding of signs might entail the recruitment of a more general system of action observation and production (Corina & Knapp 2006; Corina & Gutiérrez 2016). In a recent study, Corina & Gutiérrez (2016) found that sign recognition might rely on the signer's internal body representation, especially when processing conditions are not optimal (e.g., having learned ASL later in life).

Altogether, these results show that signers engage in phonological decoding during lexical access in a similar way to spoken language users, using a strikingly similar left-lateralized brain network. However, we should not assume that language processing is identical for signed and spoken languages. Specific features of signing can shape lexical access in ways that are unique to signing. Differences in the direction and timing of the effects for each of the parameters, differences in the use of phonological information depending on AoA, and the involvement of additional left parietal areas during sign comprehension should all be taken into account. Remarkably, parietal areas involved in sign comprehension are also active during sign production, which suggests that these areas may handle the action/motion aspects of signs rather than their more abstract linguistic aspects.

3.4.2 Production

As mentioned above, sign error corpora and picture-naming paradigms have allowed us to determine that semantic and phonological levels of processing are separate in sign languages. In addition, those studies have provided evidence in favor of a lexical organization according to the phonological properties of signs. As for comprehension, most of the evidence favors the idea that phonological parameters constitute separate units in the lexicon that do not play the same role during sign production. For instance, studies of slips of the hand report an uneven distribution of errors (Klima & Bellugi 1979), with more errors involving handshape than location or movement (see also Hohenberger et al. 2002). Additionally, during ToF states (Thompson et al. 2005), despite the momentary inability to retrieve a sign, signers had partial access to its phonological form, mostly concerning handshape and location. Similarly, phonological parameters differed in retrieval accessibility in the fluency task (Marshall et al. 2014): movement was less readily retrieved than handshape and location. Furthermore, there were richer semantic clusters for the location and handshape tasks than those found for spoken languages.

Picture-sign interference tasks have also been employed to explore the role of handshape, location, and movement during phonological encoding in sign production, although the results are somewhat mixed. Baus et al. (2008) presented deaf signers with pictures superimposed with sign distractors that could be phonologically related (on one parameter) or unrelated. The phonological facilitation effect reported in the spoken modality was observed in sign production only when the target picture and the distractor shared handshape. In contrast, movement did not influence signing, and location showed an interference pattern: signs sharing location with the distractor sign were produced more slowly than signs in the unrelated condition. This pattern of results clearly replicates those reported in sign comprehension (e.g., Carreiras et al. 2008), although the origin of the effects remains unclear. One possibility is that the observed interference effect for location is semantically mediated. There is evidence of semantic families sharing the same location (e.g., signs referring to feelings tend to be located close to the heart; Cates et al. (2013)), which might modulate the observed effects. However, more research is needed to fully characterize the relationship between phonological parameters and meaning, as it is not yet clear how this modulation might occur (see also van der Hulst & van der Kooij, Chapter 1). One must acknowledge that, using the same picture-sign interference paradigm, Corina & Knapp (2006) did not observe any effects when one phonological parameter was manipulated. Interestingly, they observed a facilitatory effect only for the combination of two parameters, location and movement: when pictures and distractor signs shared these two parameters, signing latencies were reduced compared to the unrelated condition (see also Baus et al. (2014) for the same result when two phonological parameters were manipulated). These results replicate those reported for comprehension (Dye & Shih 2006; Carreiras et al. 2008). Altogether, the findings support the phonological privilege of the combination of location and movement, both in comprehension and in production (e.g., Sandler 1987, 1989; but see Chinchor 1978; Wilbur 1993). We do not expand on the syllabic explanation here, as van der Hulst & van der Kooij (Chapter 1) and Fenlon & Brentari (Chapter 4) provide a more detailed theoretical description.

Lesion and neuroimaging studies of sign production have shown, in addition to activation of left perisylvian language areas, a greater involvement of the left parietal lobe in signed than in spoken language production. This parietal activation has been linked to phonological encoding and proprioceptive monitoring in sign production. Studies of aphasic signers have documented phonological paraphasias and substitution errors involving handshape, location, and movement separately (Poizner et al. 1987; Corina et al. 1992). While location is often preserved, handshape seems to be the most affected parameter. Corina et al. (1999) tested name generation and repetition of ASL signs and non-signs during cortical stimulation in a deaf signer who was undergoing brain surgery. Electrical stimulation of the left Broca's area and SMG resulted in phonological errors. Global phonetic reductions (a lax fist and a rubbing movement in the correct location) were observed after stimulation of Broca's area, while SMG stimulation led to errors in the selection of specific phonological parameters, especially handshapes (e.g., replacement of the correct handshape with another pronounceable handshape). Interestingly, stimulation of the SMG resulted in errors that the authors attribute to a disruption of the integration of semantic and phonological information.

In sum, despite phonological encoding proceeding similarly in the spoken and signed modalities, the role that minimal units (i.e., phonological parameters) play during comprehension and production seems to be modality-specific. Mixed results regarding the influence of handshape, location, and movement call for further studies to better understand the reality of phonological units during sign processing.

3.5 Cross-linguistic influences on sign language processing: bimodal bilingualism

It is widely assumed that the two languages of bilinguals are active when reading (Dijkstra 2005), listening (Marian & Spivey 2003), and speaking (Kroll et al. 2006). Indeed, the current dominant view is that bilingual language processing implies constant interaction between languages (at all levels of processing), even when lexicalizations are restricted to one language. Accordingly, during language processing, words in the language in use are active, as well as their translations. That is, when a Spanish–English bilingual is presented with the word table, its translation mesa becomes automatically activated (hereafter, we refer to bilinguals with two spoken languages as unimodal bilinguals). This inability to turn off one language has generated substantial work addressing questions such as: how do bilinguals' two languages interact at different levels of processing; how are the two languages organized in the brain; what are the linguistic and cognitive consequences of bilingualism; and how is all this modulated by the age at which the second language is acquired and the level of proficiency attained?

In recent years, there has been a growing interest in studying processing in bimodal bilinguals (see Abutalebi & Clahsen (2016); Emmorey et al. (2016); Thompson & Gutiérrez-Sigut (2019) for reviews). Bimodal bilinguals are people who employ two languages in two different modalities, namely spoken and signed. Note that the term frequently refers to hearing people born to deaf parents (children of deaf adults: CODAs), but hearing signers who learn a sign language late (M2L2 bilinguals; Pichler & Koulidobrova 2015) and deaf signers who acquire a written second language have also been considered bimodal bilinguals (e.g., Morford et al. 2011). Importantly, the management of two sensory-motor systems makes bimodal bilingualism a unique window onto the nature of language co-activation and control mechanisms, as well as their brain organization.

Different studies have demonstrated that cross-linguistic activation is an automatic and unconscious process for bimodal bilinguals. As for speech (see Thierry & Wu 2007), evidence for parallel activation comes mainly from bilinguals being sensitive to properties of the language not in use. Morford et al. (2011) showed activation of the first language (ASL) during second language processing (written English). In an implicit priming experiment, deaf signers saw written English word pairs and decided whether the pairs were semantically related (heart–brain) or not (paper–cheese). Importantly, the ASL translations of some of these word pairs were phonologically related. Signers were faster at accepting semantically related words when the corresponding ASL signs were phonologically related than when they were phonologically unrelated. In contrast, they were slower at rejecting semantically unrelated words when the ASL translations of the word pair were phonologically related (see also Ormel et al. 2012; Kubus et al. 2014). Recently, Villameriel et al. (2016), who tested hearing signers, extended these results by showing that cross-language interactions are bidirectional, and that the non-dominant language (L2 sign language) becomes active during processing in the dominant language (L1 spoken language). In addition, bimodal co-activation was observed regardless of the age at which sign language was acquired, suggesting that co-activation in bilinguals is a pervasive phenomenon (see also Marian & Shook (2012) and Giezen & Emmorey (2015); but see Williams & Newman (2016) for limitations of cross-language activation).

Neuroimaging studies have shown that sign language experience seems to influence the functional brain network of the first language (Zou et al. 2012a). Brain activation of bimodal bilinguals and monolinguals was compared while participants performed a picture-naming task in their L1 (Chinese). While the recruited brain areas were very similar for both groups, bimodal bilinguals showed greater engagement of the right SMG, the right superior temporal gyrus (STG), and the superior occipital gyrus (SOG). This was interpreted as reflecting enhanced non-linguistic visuospatial abilities following experience with a sign language (see also Zou et al. (2012b) for structural brain changes after sign exposure).
Another type of evidence supporting the existence of cross-language activation in bimodal bilinguals comes from their ability to speak and sign simultaneously, producing 'code-blends' at very little cost (Casey & Emmorey 2009; Emmorey et al. 2012; Lillo-Martin et al. 2016). This ability provides a unique opportunity to test language control mechanisms and the cognitive consequences associated with their constant use. Because unimodal bilinguals cannot speak both of their languages at the same time, they require language control mechanisms that restrict lexicalization to the intended language while avoiding interference from the language not in use (Green 1998). Given that bimodal bilinguals are free of this articulatory restriction, the question is whether they apply similar control mechanisms.

The relationship between language control mechanisms and general executive control functions has been tested in bimodal bilinguals in several ways. First, studies have explored whether language switching between modalities engages the frontal brain areas linked to control (e.g., dorsolateral prefrontal cortex, anterior cingulate cortex; see Thompson & Gutiérrez-Sigut (2019) for a recent review). Kovelman et al. (2009) used functional near-infrared spectroscopy (fNIRS) to compare brain activity of bimodal bilinguals during naming in two linguistic contexts: a monolingual context (lexicalizations restricted to one language) and a bilingual context (sign and speech used simultaneously or in rapid succession). The data showed larger engagement of the language network in both bilingual contexts than in the monolingual context. However, the lack of extra engagement of frontal areas was interpreted as suggesting that bimodal bilinguals do not need to resolve motor-articulatory competition as unimodal bilinguals do (although see Dias et al. (2017) for a different account).

Second, researchers have investigated the bilingual 'cognitive advantage', that is, the observation that bilinguals suffer less interference than monolinguals in non-linguistic tasks that require ignoring irrelevant stimuli (e.g., the Simon task). This advantage has been attributed to the constant engagement of control mechanisms during bilingual production to avoid interference from the language not in use, which enhances cognitive control (Bialystok 2001). Emmorey et al. (2008) tested whether the cognitive advantage extends to bimodal bilinguals by comparing bimodal bilinguals, unimodal bilinguals, and monolinguals in a task requiring executive control. Their results revealed a cognitive advantage over monolinguals for unimodal but not for bimodal bilinguals. This suggests that the bilingual cognitive advantage arises from the experience of controlling two languages of the same modality. Giezen et al. (2015) also failed to find a cognitive advantage for bimodal bilinguals compared to monolinguals. However, only for bimodal bilinguals did performance on a linguistic task involving competing lexical entries correlate with performance on a non-linguistic control task, suggesting a partial overlap between language control and more general cognitive control mechanisms.

Thus, while previous results consistently show that both languages of bimodal bilinguals are active during production and comprehension, results are mixed on whether bimodal bilingualism entails the same language control mechanisms as unimodal bilingualism and whether these are similar to those engaged in more general executive functions. It is important to note, however, that more evidence is needed to determine whether control mechanisms are modulated by the output modality of the two languages. This is especially the case for evidence from the 'cognitive advantage', as this phenomenon seems to be more elusive than previously assumed in the unimodal bilingual literature.
Thus, the lack of a ‘cognitive advantage’ might result from other issues (e.g., language use) rather than the modality itself.

3.6 Conclusion

To sum up, there are striking similarities between spoken and signed language processing. Results from behavioral studies, as well as evidence of similar neural substrates underlying speech and sign processing, have led to the widely accepted view that universal language processing principles can explain lexical access in both signed and spoken languages. However, although the psycholinguistic and cognitive mechanisms as well as the neural networks underlying speech and sign processing are very similar, they are not identical. We propose that studying the differences between speech and sign processing can lead to a more complete picture of human language processing. Acknowledging these differences can also point researchers to factors influencing spoken language processing that might have been under-researched so far (e.g., the study of sound symbolism).

References

Abutalebi, Jubin & Harald Clahsen. 2016. Bimodal bilingualism: Language and cognition. Bilingualism: Language and Cognition 19(2). 221–222.
Armstrong, David F., William C. Stokoe, & Sherman E. Wilcox. 1995. Gesture and the nature of language. Cambridge: Cambridge University Press.
Baker, Stephanie A., William J. Idsardi, Roberta Michnick Golinkoff, & Laura-Ann Petitto. 2005. The perception of handshapes in American Sign Language. Memory & Cognition 33(5). 887–904.
Balota, David A., Maura Pilotti, & Michael J. Cortese. 2001. Subjective frequency estimates for 2,938 monosyllabic words. Memory & Cognition 29(4). 639–647.
Baus, Cristina, Manuel Carreiras, & Karen Emmorey. 2012. When does iconicity in sign language matter? Language and Cognitive Processes 28(January 2013). 261–271.
Baus, Cristina & Albert Costa. 2015. On the temporal dynamics of sign production: An ERP study in Catalan Sign Language (LSC). Brain Research 1609(1). 40–53.
Baus, Cristina, Eva Gutiérrez, & Manuel Carreiras. 2014. The role of syllables in sign language production. Frontiers in Psychology 5.
Baus, Cristina, Eva Gutiérrez-Sigut, Josep Quer, & Manuel Carreiras. 2008. Lexical access in Catalan Signed Language (LSC) production. Cognition 108(3). 856–865.
Becker, Curtis A. 1980. Semantic context effects in visual word recognition: An analysis of semantic strategies. Memory & Cognition 8(6). 493–512.
Bellugi, Ursula, Howard Poizner, & Edward S. Klima. 1989. Language, modality and the brain. Trends in Neurosciences 12(10). 380–388.
Bentin, Shlomo, Marta Kutas, & Steven A. Hillyard. 1993. Electrophysiological evidence for task effects on semantic priming in auditory word processing. Psychophysiology 30(2). 161–169.
Bialystok, Ellen. 2001. Bilingualism in development: Language, literacy, and cognition. Cambridge: Cambridge University Press.
Bosworth, Rain G. & Karen Emmorey. 2010. Effects of iconicity and semantic relatedness on lexical access in American Sign Language. Journal of Experimental Psychology: Learning, Memory, and Cognition 36(6). 1573–1581.
Braun, Arnoud R., Andre Guillemin, Lara Hosey, & Mary Varga. 2001. The neural organization of discourse: An H₂¹⁵O-PET study of narrative production in English and American Sign Language. Brain: A Journal of Neurology 124(10). 2028–2044.
Brown, Alan S. 1991. A review of the tip-of-the-tongue experience. Psychological Bulletin 109(2). 204–223.
Campbell, Ruth, Paula Martin, & Theresa White. 1992. Forced choice recognition of sign in novice learners of British Sign Language. Applied Linguistics 13(2). 185–201.
Capek, Cheryl M., Giordana Grossi, Aaron J. Newman, Susan L. McBurney, David Corina, Brigitte Roeder, & Helen J. Neville. 2009. Brain systems mediating semantic and syntactic processing in deaf native signers: Biological invariance and modality specificity. Proceedings of the National Academy of Sciences 106(21). 8784–8789.
Caramazza, Alfonso. 1997. How many levels of processing are there in lexical access? Cognitive Neuropsychology 14(1). 177–208.
Caramazza, Alfonso, Albert Costa, Michele Miozzo, & Yanchao Bi. 2001. The specific-word frequency effect: Implications for the representation of homophones in speech production. Journal of Experimental Psychology: Learning, Memory, and Cognition 27(6). 1430–1450.

Cardin, Velia, Eleni Orfanidou, Lena Kästner, Jerker Rönnberg, Bencie Woll, Cheryl M. Capek, & Mary Rudner. 2016. Monitoring different phonological parameters of sign language engages the same cortical language network but distinctive perceptual ones. Journal of Cognitive Neuroscience 28(1). 20–40.
Cardin, Velia, Eleni Orfanidou, Jerker Rönnberg, Cheryl M. Capek, Mary Rudner, & Bencie Woll. 2013. Dissociating cognitive and sensory neural plasticity in human superior temporal cortex. Nature Communications 4. 1473.
Carreiras, Manuel. 2010. Sign language processing. Language and Linguistics Compass 4(7). 430–444.
Carreiras, Manuel, Eva Gutiérrez-Sigut, Silvia Baquero Castellanos, & David P. Corina. 2008. Lexical processing in Spanish Sign Language (LSE). Journal of Memory and Language 58(1). 100–122.
Caselli, Naomi K. & Ariel M. Cohen-Goldberg. 2014. Lexical access in sign language: A computational model. Frontiers in Psychology 5(May).
Caselli, Naomi K., Zed Sevcikova Sehyr, Ariel M. Cohen-Goldberg, & Karen Emmorey. 2016. ASL-LEX: A lexical database of American Sign Language. Behavior Research Methods 49(2). 784–801.
Casey, Shannon & Karen Emmorey. 2009. Co-speech gesture in bimodal bilinguals. Language and Cognitive Processes 24(2). 290–312.
Cates, Deborah, Eva Gutiérrez-Sigut, Sarah Hafer, Ryan Barrett, & David Corina. 2013. Location, location, location. Sign Language Studies 13(4). 433–461.
Chiarello, Christine, Robert Knight, & Mark Alan Mandel. 1982. Aphasia in a prelingually deaf woman. Brain 105(1). 29–51.
Chinchor, Nancy. 1978. The syllable in ASL. Paper presented at the MIT Sign Language Symposium, Cambridge, MA.
Corina, David P. 1999. On the nature of left hemisphere specialization for signed language. Brain and Language 69(2). 230–240.
Corina, David P., Yi-Shiuan Chiu, Heather Patterson Knapp, Ralf R. Greenwald, & Allen Braun. 2007. Neural correlates of human action observation in hearing and deaf subjects. Brain Research 1152(1). 111–129.
Corina, David P. & Michael Grosvald. 2012. Exploring perceptual processing of ASL and human actions: Effects of inversion and repetition priming. Cognition 122(3). 330–345.
Corina, David P., Michael Grosvald, & Christian Michel Lachaud. 2011. Perceptual invariance or orientation specificity in American Sign Language? Evidence from repetition priming for signs and gestures. Language and Cognitive Processes 26(8). 1102–1135.
Corina, David P. & Eva Gutiérrez. 2016. Embodiment and American Sign Language: Exploring sensory-motor influences in the recognition of American Sign Language. Gesture 15(3). 291–305.
Corina, David P. & Ursula Hildebrandt. 2002. Psycholinguistic investigations of phonological structure in American Sign Language. In Richard P. Meier, Kearsy Cormier, & David Quinto-Pozos (eds.), Modality and structure in signed and spoken languages, 88–111. Cambridge: Cambridge University Press.
Corina, David P., Lucia San José-Robertson, Andre Guillemin, Julia High, & Allen R. Braun. 2003. Language lateralization in a bimanual language. Journal of Cognitive Neuroscience 15(5). 718–730.
Corina, David P. & Heather Knapp. 2006. Sign language processing and the mirror neuron system. Cortex 42(4). 529–539.
Corina, David P., Laurel A. Lawyer, & Deborah Cates. 2013. Cross-linguistic differences in the neural representation of human language: Evidence from users of signed languages. Frontiers in Psychology 3. 587.
Corina, David P. & Susan L. McBurney. 2001. The neural representation of language in users of American Sign Language. Journal of Communication Disorders 34(6). 455–471.
Corina, David P., Susan L. McBurney, Carl Dodrill, Kevin Hinshaw, Jim Brinkley, & George Ojemann. 1999. Functional roles of Broca's area and SMG: Evidence from cortical stimulation mapping in a deaf signer. NeuroImage 10(5). 570–581.
Corina, David P., Howard Poizner, Ursula Bellugi, Todd Feinberg, Dorothy Dowd, & Lucinda O'Grady-Batch. 1992. Dissociation between linguistic and nonlinguistic gestural systems: A case for compositionality. Brain and Language 43(3). 414–447.

Cuxac, Christian. 2000. La Langue des Signes Française (LSF). Les voies de l'iconicité. Paris: Ophrys.
Damasio, Antonio, Ursula Bellugi, Hanna Damasio, Howard Poizner, & John Van Gilder. 1986. Sign language aphasia during left-hemisphere Amytal injection. Nature 322(6077). 363.
de Groot, Annette M.B. 1983. The range of automatic spreading activation in word priming. Journal of Verbal Learning and Verbal Behavior 22(4). 417–436.
Dell, Gary S. & Padraig G. O'Seaghdha. 1991. Mediated and convergent lexical priming in language production: A comment on Levelt et al. (1991). Psychological Review 98(4). 604–618.
Dias, Patricia, Saul Villameriel, Marcel R. Giezen, Brendan Costello, & Manuel Carreiras. 2017. Language switching across modalities: Evidence from bimodal bilinguals. Journal of Experimental Psychology: Learning, Memory, and Cognition 43(11). 1828.
Dijkstra, Ton. 2005. Bilingual visual word recognition and lexical access. In Judith F. Kroll & Annette M.B. de Groot (eds.), Handbook of bilingualism: Psycholinguistic approaches, 179–201. New York: Oxford University Press.
Dye, Matthew W.G., Jenessa Seymour, & Peter C. Hauser. 2016. Response bias reveals enhanced attention to inferior visual field in signers of American Sign Language. Experimental Brain Research 234(4). 1067–1076.
Dye, Matthew W.G. & Shui-I Shih. 2006. Phonological priming in British Sign Language. Laboratory Phonology 8: Varieties of Phonological Competence. 243–263.
Emmorey, Karen. 1991. Repetition priming with aspect and agreement morphology in American Sign Language. Journal of Psycholinguistic Research 20(5). 365–388.
Emmorey, Karen. 1995. Processing the dynamic visual-spatial morphology of signed languages. In Laurie Beth Feldman (ed.), Morphological aspects of language processing: Crosslinguistic perspectives, 29–54. Mahwah, NJ: Lawrence Erlbaum.
Emmorey, Karen. 2002. The effects of modality on spatial language: How signers and speakers talk about space. In Richard P. Meier, Kearsy A. Cormier, & David G. Quinto-Pozos (eds.), Modality and structure in signed and spoken languages, 405–421. Cambridge: Cambridge University Press.
Emmorey, Karen & David P. Corina. 1990. Lexical recognition in sign language: Effects of phonetic structure and morphology. Perceptual and Motor Skills 71. 1227–1252.
Emmorey, Karen & David P. Corina. 1993. Hemispheric specialization for ASL signs and English words: Differences between imageable and abstract forms. Neuropsychologia 31(7). 645–653.
Emmorey, Karen, Helsa B. Borinstein, & Robin L. Thompson. 2005. Bimodal bilingualism: Code-blending between spoken English and American Sign Language. In ISB4: Proceedings of the 4th International Symposium on Bilingualism. 663–673.
Emmorey, Karen, Marcel R. Giezen, & Tamar H. Gollan. 2016. Psycholinguistic, cognitive, and neural implications of bimodal bilingualism. Bilingualism: Language and Cognition 19(2). 223–242.
Emmorey, Karen, Gigi Luk, Jennie Pyers, & Ellen Bialystok. 2008. The source of enhanced cognitive control in bilinguals: Evidence from bimodal bilinguals. Psychological Science 19(12). 1201–1206.
Emmorey, Karen, Stephen McCullough, & Diane Brentari. 2003. Categorical perception in American Sign Language. Language and Cognitive Processes 18(1). 21–45.
Emmorey, Karen, Stephen McCullough, Sonya Mehta, & Thomas J. Grabowski. 2014. How sensory-motor systems impact the neural organization for language: Direct contrasts between spoken and signed language. Frontiers in Psychology 5. 484.
Emmorey, Karen, Sonya Mehta, & Thomas J. Grabowski. 2007. The neural correlates of sign versus word production. NeuroImage 36(1). 202–208.
Emmorey, Karen, Jennifer A.F. Petrich, & Tamar H. Gollan. 2012. Bilingual processing of ASL-English codeblends: The consequences of accessing two lexical representations simultaneously. Journal of Memory and Language 67(1). 199–210.
Emmorey, Karen, Jennifer A.F. Petrich, & Tamar H. Gollan. 2013. Bimodal bilingualism and the frequency-lag hypothesis. Journal of Deaf Studies and Deaf Education 18(1). 1–11.
Emmorey, Karen, Jiang Xu, Patrick J. Gannon, Susan Goldin-Meadow, & Allen Braun. 2010. CNS activation and regional connectivity during pantomime observation: No engagement of the mirror neuron system for deaf signers. NeuroImage 49(1). 994–1005.
Fenlon, Jordan, Adam Schembri, Ramas Rentelis, David Vinson, & Kearsy Cormier. 2014. Using conversational data to determine lexical frequency in British Sign Language: The influence of text type. Lingua 143. 187–202.

Ferjan Ramírez, Naja, Matthew K. Leonard, Tristan S. Davenport, Christina Torres, Eric Halgren, & Rachel I. Mayberry. 2016. Neural language processing in adolescent first-language learners: Longitudinal case studies in American Sign Language. Cerebral Cortex 26(3). 1015–1026.
Fromkin, Victoria A. 1971. The non-anomalous nature of anomalous utterances. Language 47(1). 27–52.
Gernsbacher, Morton A. 1984. Resolving 20 years of inconsistent interactions between lexical familiarity and orthography, concreteness, and polysemy. Journal of Experimental Psychology: General 113(2). 256.
Giezen, Marcel R., Henrike K. Blumenfeld, Anthony Shook, Viorica Marian, & Karen Emmorey. 2015. Parallel language activation and inhibitory control in bimodal bilinguals. Cognition 141. 9–25.
Giezen, Marcel R. & Karen Emmorey. 2015. Language co-activation and lexical selection in bimodal bilinguals: Evidence from picture–word interference. Bilingualism: Language and Cognition 19(2). 264–276.
Goldin-Meadow, Susan & Diane Brentari. 2015. Gesture, sign and language: The coming of age of sign language and gesture studies. Behavioral and Brain Sciences 40. e46.
Green, David W. 1998. Mental control of the bilingual lexico-semantic system. Bilingualism: Language and Cognition 1(2). 67–81.
Grosjean, François. 1980. Spoken word recognition processes and the gating paradigm. Perception & Psychophysics 28(4). 267–283.
Grosvald, Michael, Eva Gutiérrez, Sarah Hafer, & David Corina. 2012. Dissociating linguistic and non-linguistic gesture processing: Electrophysiological evidence from American Sign Language. Brain and Language 121(1). 12–24.
Grote, Klaudia & Erika Linz. 2003. The influence of sign language iconicity on semantic conceptualization. In Wolfgang G. Müller & Olga Fischer (eds.), From sign to signing, 23–40. Amsterdam: John Benjamins.
Gutiérrez, Eva & Manuel Carreiras. 2009. El papel de los parámetros fonológicos en el procesamiento de los signos de la lengua de signos española. Madrid: Fundación CNSE.
Gutiérrez, Eva, Oliver Müller, Cristina Baus, & Manuel Carreiras. 2012. Electrophysiological evidence for phonological priming in Spanish Sign Language lexical access. Neuropsychologia 50(7). 1335–1346.
Gutiérrez, Eva, Deborah Williams, Michael Grosvald, & David Corina. 2012. Lexical access in American Sign Language: An ERP investigation of effects of semantics and phonology. Brain Research 1468. 63–83.
Gutiérrez-Sigut, Eva, Brendan Costello, Cristina Baus, & Manuel Carreiras. 2016. LSE-Sign: A lexical database for Spanish Sign Language. Behavior Research Methods 48. 123–137.
Gutiérrez-Sigut, Eva, Richard Daws, Heather Payne, Jonathan Blott, Chloë Marshall, & Mairéad MacSweeney. 2015. Language lateralization of hearing native signers: A functional transcranial Doppler sonography (fTCD) study of speech and sign production. Brain and Language 151. 23–34.
Gutiérrez-Sigut, Eva, Heather Payne, & Mairéad MacSweeney. 2015. Investigating language lateralization during phonological and semantic fluency tasks using functional transcranial Doppler sonography. Laterality 20(1). 49–68.
Hänel-Faulhaber, Barbara, Nils Skotara, Monique Kügow, Uta Salden, Davide Bottari, & Brigitte Röder. 2014. ERP correlates of German Sign Language processing in deaf native signers. BMC Neuroscience 15(1). 62.
Hohenberger, Annette, Daniela Happ, & Helen Leuninger. 2002. Modality-dependent aspects of sign language production: Evidence from slips of the hands and their repairs in German Sign Language. In Richard P. Meier, Kearsy A. Cormier, & David G. Quinto-Pozos (eds.), Modality and structure in signed and spoken languages, 112–142. Cambridge: Cambridge University Press.
Hohenberger, Annette & Helen Leuninger. 2012. Production. In Roland Pfau, Markus Steinbach, & Bencie Woll (eds.), Sign language. An international handbook (HSK – Handbooks of Linguistics and Communication Science), 711–738. Berlin: De Gruyter Mouton.
Holcomb, Phillip J. & Helen J. Neville. 1990. Auditory and visual semantic priming in lexical decision: A comparison using event-related brain potentials. Language and Cognitive Processes 5(4). 281–312.

Hosemann, Jana, Annika Herrmann, Markus Steinbach, Ina Bornkessel-Schlesewsky, & Matthias Schlesewsky. 2013. Lexical prediction via forward models: N400 evidence from German Sign Language. Neuropsychologia 51(11). 2224–2237.
Indefrey, Peter & Willem J.M. Levelt. 2004. The spatial and temporal signatures of word production components. Cognition 92(1–2). 101–144.
Johnston, Trevor. 2012. Lexical frequency in sign languages. Journal of Deaf Studies and Deaf Education 17(2). 163–193.
Kittredge, Audrey K., Gary S. Dell, Jay Verkuilen, & Myrna F. Schwartz. 2008. Where is the effect of frequency in word production? Insights from aphasic picture-naming errors. Cognitive Neuropsychology 25(4). 463–492.
Klima, Edward S. & Ursula Bellugi. 1979. The signs of language. Cambridge, MA: Harvard University Press.
Kovelman, Ioulia, Mark H. Shalinsky, Katherine S. White, Shawn N. Schmitt, Melody S. Berens, Nora Paymer, & Laura-Ann Petitto. 2009. Dual language use in sign-speech bimodal bilinguals: fNIRS brain-imaging evidence. Brain and Language 109(2–3). 112–123.
Kroll, Judith F., Susan C. Bobb, & Zofia Wodniecka. 2006. Language selectivity is the exception, not the rule: Arguments against a fixed locus of language selection in bilingual speech. Bilingualism: Language and Cognition 9(2). 119–135.
Kubus, Okan, Agnes Villwock, Jill P. Morford, & Christian Rathmann. 2014. Word recognition in deaf readers: Cross-language activation of German Sign Language and German. Applied Psycholinguistics 36(4). 831–854.
Kučera, Henry & W. Nelson Francis. 1967. Computational analysis of present-day American English. Sudbury: Dartmouth Publishing Group.
Kutas, Marta & Steven A. Hillyard. 1980. Event-related brain potentials to semantically inappropriate and surprisingly large words. Biological Psychology 11(2). 99–116.
Leonard, Matthew K., Naja Ferjan Ramírez, Christina Torres, Katherine E. Travis, Marla Hatrak, Rachel I. Mayberry, & Eric Halgren. 2012. Signed words in the congenitally deaf evoke typical late lexicosemantic responses with no early visual responses in left superior temporal cortex. Journal of Neuroscience 32(28). 9700–9705.
Levelt, Willem J.M., Ardi Roelofs, & Antje S. Meyer. 1999. A theory of lexical access in speech production. Behavioral and Brain Sciences 22(1). 1–75.
Lieberth, Ann K. & Mary Ellen Bellile Gamble. 1991. The role of iconicity in sign language learning by hearing adults. Journal of Communication Disorders 24(2). 89–99.
Lillo-Martin, Diane, Ronice M. de Quadros, & Deborah Chen Pichler. 2016. The development of bimodal bilingualism: Implications for linguistic theory. Linguistic Approaches to Bilingualism 6(6). 719–755.
MacSweeney, Mairéad, Ruth Campbell, Bencie Woll, Vincent Giampietro, Anthony S. David, Philip K. McGuire, Gemma A. Calvert, & Michael J. Brammer. 2004. Dissociating linguistic and nonlinguistic gestural communication in the brain. NeuroImage 22(4). 1605–1618.
MacSweeney, Mairéad, Cheryl M. Capek, Ruth Campbell, & Bencie Woll. 2008. The signing brain: The neurobiology of sign language. Trends in Cognitive Sciences 12(11). 432–440.
MacSweeney, Mairéad, Dafydd Waters, Michael J. Brammer, Bencie Woll, & Usha Goswami. 2008. Phonological processing in deaf signers and the impact of age of first language acquisition. NeuroImage 40(3). 1369–1379.
Mandel, Mark. 1977. Iconicity of signs and their learnability by non-signers. Proceedings of the First National Symposium on Sign Language Research and Teaching. 259–266.
Marian, Victoria & Anthony Shook. 2012. The cognitive benefits of being bilingual. Cerebrum: The Dana Forum on Brain Science. 1–13.
Marian, Victoria & Michael Spivey. 2003. Bilingual and monolingual processing of competing lexical items. Applied Psycholinguistics 24(2). 173–193.
Marshall, Chloë, Jo Atkinson, Elaine Smulovitch, Alice Thacker, & Bencie Woll. 2004. Aphasia in a user of British Sign Language: Dissociation between sign and gesture. Cognitive Neuropsychology 21(5). 537–554.
Marshall, Chloë, Katherine Rowley, & Joanna Atkinson. 2014. Modality-dependent and -independent factors in the organisation of the signed language lexicon: Insights from semantic and phonological fluency tasks in BSL. Journal of Psycholinguistic Research 43(5). 587–610.

Mayberry, Rachel I. & Pamela Witcher. 2005. Age of acquisition effects on lexical access in ASL: Evidence for the psychological reality of phonological processing in sign language. In Proceedings of the 30th Boston University Conference on Language Development. Meade, Gabriela, Katherine J. Midgley, Zed Sevcikova Sehyr, Phillip J. Holcomb, & Karen Emmorey. 2017. Implicit co-activation of American Sign Language in deaf readers: An ERP study. Brain & Language 170. 50–61. Meyer, David E. & Roger W. Schvaneveldt. 1971. Facilitation in recognizing pairs of words: Evidence of a dependence between retrieval operations. Journal of Experimental Psychology 90(2). 227–234. Morford, Jill P., Erin Wilkinson, Agnes Villwock, Pilar Piñar, & Judith F. Kroll. 2011. When deaf signers read English: Do written words activate their sign translations? Cognition 118(2). 286–292. Navarrete, Eduardo, Arianna Caccaro, Francesco Pavani, Bradford Z. Mahon, & Francesca Peressotti. 2015. With or without semantic mediation: Retrieval of lexical representations in sign production. Journal of Deaf Studies and Deaf Education 20(2). 163–171. Navarrete, Eduardo, Francesca Peressotti, Luigi Lerose, & Michele Miozzo. 2017. Activation cascading in sign production. Journal of Experimental Psychology: Learning, Memory, and Cognition 43(2). 302–318. Neely, James H. 1977. Semantic priming and retrieval from lexical memory: Roles of inhibitionless spreading activation and limited-capacity attention. Journal of Experimental Psychology: General 106(3). 226–254. Neville, Helen J., Daphne Bavelier, David Corina, Josef Rauschecker, Avi Karni, Anil Lalwani, Allen Braun, Vince Clark, Peter Jezzard, & Robert Turner. 1998. Cerebral organization for language in deaf and hearing subjects: Biological constraints and effects of experience. Proceedings of the National Academy of Sciences of the United States of America 95(3). 922–929. Neville, Helen J., Sharon A. Coffey, Donald S. Lawson, Andrew Fischer, Karen Emmorey, & Ursula Bellugi. 1997. Neural systems mediating American Sign Language: Effects of sensory experience and age of acquisition. Brain and Language 57(3). 285–308. Newkirk, Don, Edward S. Klima, Charlene C. Pedersen, & Ursula Bellugi. 1980. Linguistic evidence from slips of the hand. In Victoria A. Fromkin (ed.), Errors in linguistic performance: Slips of the tongue, ear, pen, and hand, 165–197. New York: Academic Press. Newman, Aaron J., Daphne Bavelier, David Corina, Peter Jezzard, & Helen J. Neville. 2002. A critical period for right hemisphere recruitment in American Sign Language processing. Nature Neuroscience 5(1). 76–80. Newman, Aaron J., Ted Supalla, Nina Fernández, Elissa L. Newport, & Daphne Bavelier. 2015. Neural systems supporting linguistic structure, linguistic experience, and symbolic communication in sign language and gesture. Proceedings of the National Academy of Sciences 112(37). 11684–11689. Orfanidou, Eleni, Robert Adam, James M. McQueen, & Gary Morgan. 2009. Making sense of nonsense in British Sign Language (BSL): The contribution of different phonological parameters to sign recognition. Memory & Cognition 37(3). 302–315. Ormel, Ellen, Daan Hermans, Harry Knoors, & Ludo Verhoeven. 2009. The role of sign phonology and iconicity during sign processing: The case of deaf children. Journal of Deaf Studies and Deaf Education 14(4). 436–448. Ormel, Ellen, Daan Hermans, Ludo Verhoeven, & Harry Knoors. 2012.
Cross-​language effects in written word recognition: The case of bilingual deaf children. Bilingualism: Language and Cognition 15(2). 288–​303. Ortega, Gerardo & Gary Morgan. 2015. Phonological development in hearing learners of a sign language: The influence of phonological parameters, sign complexity, and iconicity. Language Learning 65(3). 660–​688. Perniss, Pamela, Robin L. Thompson, & Gabriella Vigliocco. 2010. Iconicity as a general property of language: Evidence from spoken and signed languages. Frontiers in Psychology 1. 227. Pichler, Deborah Chen & Elena Koulidobrova. 2015. Acquisition of sign language as a second language. In Marc Marschark & Patricia E. Spencer (eds.), The Oxford handbook of deaf studies in language, 218–​230. Oxford University Press. Pietrandrea, Paola. 2002. Iconicity and arbitrariness in Italian Sign Language. Sign Language Studies 2(3). 296–​321.

Poizner, Howard, Edward S. Klima, & Ursula Bellugi. 1987. What the hands reveal about the brain. Cambridge, MA: MIT Press. Pretato, Elena, Francesca Peressotti, Carmela Bertone, & Eduardo Navarrete. 2018. The iconicity advantage in sign production: The case of bimodal bilinguals. Second Language Research 34(4). 449–462. Ramachandran, Vilayanur S. & Edward M. Hubbard. 2001. Synaesthesia – a window into perception, thought and language. Journal of Consciousness Studies 8(12). 3–34. Sandler, Wendy. 1987. Assimilation and feature hierarchy in American Sign Language. Chicago Linguistic Society 23. 266–278. Sandler, Wendy. 1989. Phonological representation of the sign: Linearity and nonlinearity in American Sign Language. Dordrecht: Foris. Stokoe, William C. 1991. Semantic phonology. Sign Language Studies 71(1). 107–114. Strijkers, Kristof, Albert Costa, & Guillaume Thierry. 2010. Tracking lexical access in speech production: Electrophysiological correlates of word frequency and cognate effects. Cerebral Cortex 20(4). 912–928. Taub, Sarah F. 2001. Language from the body: Iconicity and metaphor in American Sign Language. Cambridge: Cambridge University Press. Thierry, Guillaume & Yan Jing Wu. 2007. Brain potentials reveal unconscious translation during foreign-language comprehension. Proceedings of the National Academy of Sciences 104(30). 12530–12535. Thompson, Robin L., Karen Emmorey, & Tamar H. Gollan. 2005. ‘Tip of the fingers’ experiences by deaf signers: Insights into the organization of a sign-based lexicon. Psychological Science 16(11). 856–860. Thompson, Robin L. & Eva Gutiérrez-Sigut. 2019. Speech-sign bilingualism: A unique window into the multilingual brain. In John W. Schwieter & Michel Paradis (eds.), The handbook of the neuroscience of multilingualism, 754–783. London: Wiley-Blackwell. Thompson, Robin L., David P. Vinson, & Gabriella Vigliocco. 2009. The link between form and meaning in American Sign Language: Lexical processing effects. Journal of Experimental Psychology: Learning, Memory, and Cognition 35(2). 550–557. Thompson, Robin L., David P. Vinson, & Gabriella Vigliocco. 2010. The link between form and meaning in British Sign Language: Effects of iconicity for phonological decisions. Journal of Experimental Psychology: Learning, Memory, and Cognition 36(4). 1017–1027. Thompson, Robin L., David P. Vinson, Bencie Woll, & Gabriella Vigliocco. 2012. The road to language learning is iconic: Evidence from British Sign Language. Psychological Science 23(12). 1443–1448. Van Petten, Cyma & Marta Kutas. 1990. Interactions between sentence context and word frequency in event-related brain potentials. Memory & Cognition 18(4). 380–393. Van Petten, Cyma & Marta Kutas. 1991. Electrophysiological evidence for the flexibility of lexical processing. In Greg B. Simpson (ed.), Understanding word and sentence, 129–174. Amsterdam: Elsevier. Villameriel, Saúl, Patricia Dias, Brendan Costello, & Manuel Carreiras. 2016. Cross-language and cross-modal activation in hearing bimodal bilinguals. Journal of Memory and Language 87. 59–70. Vinson, David, Robin L. Thompson, Robert Skinner, & Gabriella Vigliocco. 2015. A faster path between meaning and form? Iconicity facilitates sign recognition and production in British Sign Language. Journal of Memory and Language 82. 56–85. Wilbur, Ronnie B. 1993. Syllables and segments: Hold the movement and move the holds. In Geoffrey R.
Coulter (ed.), Phonetics and phonology, vol 3:  Current issues in ASL phonology, 135–​166. San Diego, CA: Academic Press. Williams, Joshua T. & Sharlene D. Newman. 2016. Spoken language activation alters subsequent sign language activation in L2 learners of American Sign Language. Journal of Psycholinguistic Research 46(1). 211–​225. Wilson, Margaret & Karen Emmorey. 1997. A visuospatial “phonological loop” in working memory: Evidence from American Sign Language. Memory & Cognition 25(3). 313–​320. Wilson, Margaret & Karen Emmorey. 1998. A “word length effect” for sign language:  Further evidence for the role of language in structuring working memory. Memory & Cognition 26(3). 584–​590.

Zou, Lijuan, Jubin Abutalebi, Benjamin Zinszer, Xin Yan, Hua Shu, Danling Peng, & Guosheng Ding. 2012a. Second language experience modulates functional brain network for the native language production in bimodal bilinguals. NeuroImage 62(3). 1367–1375. Zou, Lijuan, Guosheng Ding, Jubin Abutalebi, Hua Shu, & Danling Peng. 2012b. Structural plasticity of the left caudate in bimodal bilinguals. Cortex 48(9). 1197–1206.

4 Prosody: Theoretical and experimental perspectives
Jordan Fenlon & Diane Brentari

4.1 Introduction

Prosody is the study of suprasegmental features in language. For spoken languages, these features include pitch, length, loudness, and rhythm. They can be used to mark lexical items in focus, to indicate whether the underlying function of an utterance is declarative or interrogative, or to indicate boundaries in speech (Cruttenden 1995). They also play an important role in facilitating communication: for example, speakers use prosody to disambiguate between sentence meanings (Snedeker & Trueswell 2003) and to convey when they are about to finish speaking (Geluykens & Swerts 1994). Listeners are highly sensitive to the prosodic cues produced while speaking, and studies have shown that attending to these cues can make a significant contribution to comprehension (see Cutler et al. (1997) for an overview). Although sign languages are conveyed in the visual-gestural modality, it is clear that they have a prosodic system that is similar in function to that of spoken languages but quite different in form (Nespor & Sandler 1999; Wilbur 1999b, 2000; Sandler & Lillo-Martin 2006; Pfau & Quer 2010; Sandler 2012). That is, sign language utterances can be structured into prosodic constituents that are marked by a number of manual and non-manual features, and these features play a role similar to that documented for prosodic markers in spoken languages. In this chapter, we describe prosody as it relates to sign languages. In doing so, we draw upon a wide range of studies encompassing theoretical and experimental approaches and focusing on several (unrelated) sign languages. We conclude this chapter with a brief discussion of sign language prosody and audio-visual prosody since, more recently, studies have highlighted the importance of the face and body in the production and comprehension of prosody in spoken languages (e.g., Krahmer & Swerts 2007; Borràs-Comes & Prieto 2011). Such studies have important implications for theoretical descriptions of prosody with regard to sign languages.

4.2 Theoretical description

A range of manual and non-manual markers have been identified as relevant when describing sign language prosody. These include non-manual markers such as the brows (Nespor & Sandler 1999; Dachkovsky & Sandler 2009), blinks (Wilbur 1994; Sze 2008), head nods (Nespor & Sandler 1999), the lower face (Brentari & Crossley 2002), and the torso (Boyes Braem 1999). Manual markers include lengthening/holds (Nespor & Sandler 1999), pauses (Grosjean & Lane 1977), and transitions (Duarte 2012). These markers each make a contribution to the prosodic structure of a sentence. Generally speaking, non-manual markers contribute to intonation (i.e., they add semantic/pragmatic meaning), while manual prosodic cues mark constituent boundaries (Sandler 2012; Brentari, Falk, & Wolford 2015). It is important to note that, although we describe the prosodic function of a variety of markers in sign languages, they are not limited to such functions. Pfau & Quer (2010) provide a cross-linguistic overview of non-manual markers in sign languages and their various grammatical and prosodic roles. They describe how non-manuals can play a role at the phonological level (e.g., some signs are lexically specified for a non-manual feature, and these can be contrastive, although examples are not frequent) and at the morphosyntactic level (e.g., the use of headshake to mark negation). In some cases, grammatical and prosodic interpretations of a specific marker appear to be in conflict with one another. This issue is discussed in detail in Section 4.2.6.

4.2.1 The prosodic hierarchy

Much of sign language prosody has been described with reference to theories and work associated with Prosodic Phonology (Nespor & Vogel 1986). This theory outlines how a speech stream can be divided into prosodic constituents, which are hierarchically organized. This prosodic hierarchy, as outlined by Nespor & Vogel (1986), is generally set out as in (1).

(1) mora < syllable < foot < prosodic word < clitic group < phonological phrase < intonational phrase < phonological utterance

The smallest prosodic constituent is the mora, and the largest constituent is the phonological utterance. A prosodic constituent at one level can be divided into prosodic constituents at the next level below it without skipping levels. This means that a phonological utterance consists of one or more intonational phrases and that these phrases, in turn, consist of one or more phonological phrases (this is known as the Strict Layer Hypothesis). This hierarchy of prosodic constituents was developed after observing that morphosyntactic constituents are an insufficient point of reference when describing phonological rules. Nespor & Vogel (1986) demonstrate that each level of the prosodic hierarchy serves as the domain of application for specific phonological rules and phonetic processes and that these domains are not isomorphic with morphosyntactic constituents (i.e., they are independent of syntax). This prosodic hierarchy has been applied to sign languages, and likewise, the morphosyntactic and prosodic levels have been shown to be related, but non-isomorphic (Brentari 1998; Sandler & Lillo-Martin 2006). Beginning with the smallest units, the mora and the syllable, the following sections describe sign language prosody and show how each unit in (1) is attested.
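The Strict Layer Hypothesis lends itself to a simple structural check: every constituent must be parsed into constituents of the level immediately below it. The following sketch is purely illustrative and is not part of the original chapter; the class, level names, and toy utterance are assumptions made for exposition only.

```python
# Minimal sketch of the prosodic hierarchy in (1) and a Strict Layer check:
# every daughter must belong to the level immediately below its mother,
# so no levels are skipped. Representations are invented for illustration.

HIERARCHY = [
    "mora", "syllable", "foot", "prosodic word", "clitic group",
    "phonological phrase", "intonational phrase", "phonological utterance",
]
LEVEL = {name: i for i, name in enumerate(HIERARCHY)}


class Constituent:
    def __init__(self, level, children=()):
        self.level = level
        self.children = list(children)

    def obeys_strict_layering(self):
        """True if every daughter sits exactly one level below its mother."""
        return all(
            LEVEL[child.level] == LEVEL[self.level] - 1
            and child.obeys_strict_layering()
            for child in self.children
        )


# A toy utterance built level by level, with no skipped levels.
mora = Constituent("mora")
syllable = Constituent("syllable", [mora])
foot = Constituent("foot", [syllable])
pword = Constituent("prosodic word", [foot])
cgroup = Constituent("clitic group", [pword])
pphrase = Constituent("phonological phrase", [cgroup])
iphrase = Constituent("intonational phrase", [pphrase])
utterance = Constituent("phonological utterance", [iphrase])
print(utterance.obeys_strict_layering())  # True

# A phonological phrase dominating a prosodic word directly would skip
# the clitic group level and therefore violate strict layering.
bad = Constituent("phonological phrase", [pword])
print(bad.obeys_strict_layering())  # False
```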

4.2.2 The syllable and mora

The syllable in sign languages is defined with reference to movement (see also van der Hulst & van der Kooij, Chapter 1). For example, one path movement is considered to be equivalent to a syllable in sign language (Brentari 1998). Although most signs are monosyllabic, there are disyllabic monomorphemic signs, such as ASL DESTROY, which show that the syllable and morpheme are not isomorphic (Brentari 1998). Generally, in several approaches to phonological representation, movement has been described as analogous to vowels in spoken languages (e.g., the Hold-Movement model (Liddell & Johnson 1989), the Hand Tier Model (Sandler 1989), and the Prosodic Model (Brentari 1998)). Parallels between the two can be seen when one considers that vowels and movements are perceptually the most salient feature within a word or a sign, and that they function as the medium by which words and signs are made more audible and visible, respectively (Brentari 2002). In fact, researchers have proposed that more visually salient movements are more sonorous; therefore, wiggling the fingers is less ‘sonorant’ than twisting of the radial/ulnar joint (forearm), which in turn is less ‘sonorant’ than a path movement (Corina 1990; Perlmutter 1992; Brentari 1993; Sandler 1993). Additionally, fingerspelled letters or number signs produced in stasis have been observed to add an epenthetic movement in some sign languages when used as an independent word (Brentari 1990; Geraci 2009). Just as in spoken languages, where an operation of vowel epenthesis ensures that a syllable is well-formed, movement is inserted where necessary to ensure that the signed output is a well-formed syllable (Brentari 1990). These parallels suggest that movement plays a central organizing role at the phonological level, forming a unit similar to the syllable nucleus in spoken languages. Several proposals have been made with regard to the internal structure of the syllable. The differences between these proposals stem from opposing views regarding the role of movement within the phonological representation. Some models argue for a sequential representation where movement is represented as a segment (e.g., Liddell & Johnson 1989; Sandler 1989; Perlmutter 1992), while other models argue for a simultaneous representation where movement (or components of movement) is represented as an autosegment (e.g., Brentari 1990, 1998; van der Hulst 1993). Within sequential models, signs consist of sequences of static and dynamic segments (e.g., LML in the Hand Tier Model). Since dynamic movement segments are understood to be more sonorous than static segments, parallels can be drawn with the internal structure of syllables in spoken languages, which consist of an onset and a rime, with the latter being further divided into a nucleus and a coda. In such sequential models of sign language structure, the internal structure is organized around a sonority peak so that the most sonorous element occupies the nucleus and less sonorous elements occupy neighboring segments (similar to the Sonority Sequencing Principle in the spoken language syllable). The phonological rule known as ‘phrase-final lengthening’ makes reference to, and has been argued to provide evidence for, this sequential structure. In ASL, phrase-final lengthening is accounted for by a phonological rule of mora-insertion, where lengthening is applied not to the syllable nucleus (movement) but to the final segment (place of articulation, or location) in some signs (Perlmutter 1992).
See, however, Tyrone et al. (2010), who argue using motion-capture data that the entire final syllable is longer in phrase-final lengthening, not only the final segment or mora, suggesting that the syllable, rather than the mora, is the unit targeted for this lengthening operation. Within simultaneous representations such as the Prosodic Model, movement, handshape, and location are treated as autosegments that are simultaneously layered in the model's internal organization (Brentari 1998). Sequential segments do not have a central role in this representation and are instead derived from the type of movement specified within a sign (represented as timing segments in the terminal nodes within the prosodic features branch). Crucially, this implies that the internal organization of a syllable in sign languages is fundamentally different from syllables in spoken languages (e.g., segments are not organized around a Sonority Sequencing Principle). Brentari (1998) demonstrates how this simultaneous organization is referred to in a morphological process from which nominal forms are derived (see also Abner, Chapter 10). Signs containing one movement element (e.g., a movement generated by the hand, wrist, or elbow, such as ASL SIT) are permitted to undergo modifications (e.g., the path movement in ASL SIT is repeated to derive the nominal form CHAIR), in contrast to signs consisting of two or more movement components, such as ASL THROW (a movement produced by the hand and the elbow), which do not allow reduplication. This suggests that forms allowing reduplication have one movement component and are light syllables (i.e., consist of one mora), while those that disallow reduplication have two or more simultaneous movement elements (i.e., consist of two moras) and are therefore heavy.
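This generalization amounts to a simple decision rule: count the simultaneous movement components of a sign, classify the syllable as light (one mora) or heavy (two moras), and allow nominalizing reduplication only for light forms. The sketch below is illustrative only; the sign entries and their movement specifications are simplified assumptions, not representations drawn from the Prosodic Model itself.

```python
# Sketch of the weight-based restriction on nominalizing reduplication:
# one simultaneous movement component -> light syllable (one mora),
# reduplication possible (e.g., SIT -> CHAIR); two or more simultaneous
# components -> heavy syllable (two moras), reduplication blocked (THROW).
# Movement specifications below are simplified and assumed for illustration.

def syllable_weight(movement_components):
    """Classify a monosyllabic form as 'light' or 'heavy'."""
    return "light" if len(movement_components) == 1 else "heavy"


def allows_nominalizing_reduplication(movement_components):
    return syllable_weight(movement_components) == "light"


toy_signs = {
    "SIT": ["path (elbow)"],                       # one movement element
    "THROW": ["path (elbow)", "aperture (hand)"],  # two simultaneous elements
}

for gloss, movements in toy_signs.items():
    weight = syllable_weight(movements)
    verdict = ("nominalizing reduplication possible"
               if allows_nominalizing_reduplication(movements)
               else "nominalizing reduplication blocked")
    print(f"{gloss}: {weight} syllable -> {verdict}")
```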

4.2.3 Prosodic word

The prosodic word is a unit in the prosodic hierarchy that is larger than the syllable, smaller than the phonological phrase, and exhibits non-isomorphism with the morphosyntactic word (e.g., I've in I've not seen that film is one prosodic word but two morphosyntactic words). Several constraints that hold at the lexical level also appear as tendencies in prosodic words in sign languages and help us understand how this unit functions. Examples include the Selected Fingers Constraint (Mandel 1981; Sandler 1986; Brentari 1998), the Monosyllabicity Constraint (Coulter 1982; Sandler 1999), and the Symmetry Constraint (Battison 1978); these are described below.

a. CLEAR    b. INFORM    c. WIFE (WOMAN^MARRY)    d. AGREE (THINK^SAME)

Figure 4.1  Four signs from ASL demonstrating the following constraints: (i) the Handshape Sequencing Constraint, (ii) the Monosyllabicity Constraint, and (iii) the Symmetry Constraint. All three constraints apply to CLEAR (a) and AGREE (d) while INFORM (b) and WIFE (c) adhere to constraints (i) and (ii)
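As a rough illustration of how these well-formedness conditions (the caption's Handshape Sequencing/Selected Fingers Constraint, the Monosyllabicity Constraint, and the Symmetry Constraint) classify signs such as those in Figure 4.1, consider the sketch below; their full prose definitions follow in the next paragraph. The sketch is not part of the original chapter: the feature representations are deliberately simplified assumptions, and the starred nonce form is hypothetical.

```python
# Simplified versions of the three prosodic-word constraints discussed in
# the text, applied to toy representations. The representations are
# invented for illustration and do not follow any particular phonological
# model in detail.

def selected_fingers_ok(sign):
    # One set of selected fingers across the whole form (the handshape may
    # still open or close, as in CLEAR, without changing selected fingers).
    return len(set(sign["selected_fingers"])) == 1

def monosyllabic_ok(sign):
    # Strong tendency for signs to contain a single syllable (one movement).
    return sign["syllables"] == 1

def symmetry_ok(sign):
    # Relevant only when both hands move: the non-dominant hand must copy
    # the dominant hand's handshape, location, and movement.
    return (not sign["both_hands_move"]) or sign["h2_copies_h1"]

toy_signs = {
    # Per the Figure 4.1 caption, CLEAR satisfies all three constraints.
    "CLEAR": {"selected_fingers": ("all", "all"), "syllables": 1,
              "both_hands_move": True, "h2_copies_h1": True},
    # A hypothetical nonce form in which the two moving hands differ,
    # violating the Symmetry Constraint (and Monosyllabicity).
    "*NONCE": {"selected_fingers": ("all", "index"), "syllables": 2,
               "both_hands_move": True, "h2_copies_h1": False},
}

for gloss, sign in toy_signs.items():
    print(gloss,
          "selected fingers:", selected_fingers_ok(sign),
          "| monosyllabic:", monosyllabic_ok(sign),
          "| symmetry:", symmetry_ok(sign))
```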

The Selected Fingers Constraint specifies that a sign only has one set of selected fingers in its articulation; note that ASL CLEAR (Figure 4.1a), despite having a handshape change, has the same number of selected (or extended) fingers at the beginning and end of its articulation. The Monosyllabicity Constraint refers to the strong tendency in sign languages for signs to consist of one syllable (i.e., one movement); ASL CLEAR has only one path movement. Finally, the Symmetry Constraint refers to two-handed signs in which both hands move during the articulation of a sign. In such cases, the non-dominant hand is specified for the same handshape, location, and movement as the dominant hand, as in ASL CLEAR. While all three constraints are constraints on stems (lexical forms), we also see them at work in polymorphemic words – that is, prosodic words. For example, ASL INFORM (Figure 4.1b) can be modified spatially to mark the agent and patient without violating the Selected Fingers Constraint. ASL compounds tend to follow the Monosyllabicity Constraint; in ASL WIFE (WOMAN^MARRY) in Figure 4.1c, the repeated movement associated with WOMAN is deleted so that the resulting compound has the appearance of a monosyllabic sign. Lastly, in ASL AGREE (THINK^SAME) in Figure 4.1d, phonological features from both THINK and SAME are assimilated so that the Symmetry Constraint is satisfied. The prosodic word has been demonstrated to be the domain of application for an additional phonological process on two-handed signs known as coalescence in Israeli Sign Language (ISL) (Sandler 1999, 2012). In this process, a prosodically weak function sign undergoes restructuring such that it is attached to a lexical sign; the result is a single prosodic word, as in Figure 4.2.

SHOP    THERE    SHOP-THERE (cliticized)

Figure 4.2  Coalescence in a prosodic word in ISL (reprinted with permission from W. Sandler and D. Lillo-​Martin, Sign language and linguistic universals, Cambridge University Press)

The left image in Figure 4.2 shows the two-handed sign SHOP and the pronoun THERE produced next to each other. In the right image in Figure 4.2, the pronoun THERE has cliticized to the dominant hand of the host sign, SHOP, and the outward movement associated with the pointing sign THERE is omitted so that the resulting output has the appearance of a single monosyllabic sign. This process is considered to be post-lexical since it is non-structure preserving; that is, the resulting output violates the Symmetry Constraint (Sandler 1999). The process is also optional; its occurrence is determined primarily by rhythmic position. Coalescence is triggered by the appearance of two signs – a lexical and a functional pointing sign – in phrase-final position of a phonological phrase. Since stress assignment is conducted with reference to lexical categories, and pronouns in phrase-final position are not stressed (Wilbur 1999b), it is the lexical item that is stressed and the pronoun that is reduced (i.e., it loses its syllabicity and attaches to a neighboring lexical host). Patterns of mouth spreading have been used as further evidence that the cliticized forms mentioned above constitute a single prosodic unit. In the ISL example, the sign SHOP was simultaneously produced with the (silent) mouthing of the Hebrew word xanut meaning 'shop'. Although mouthings are typically articulated with the production of their corresponding manual sign, the mouthed Hebrew word xanut spread from the ISL sign SHOP onto the neighboring sign THERE, thus providing further evidence that this combination should be viewed as a single prosodic unit (similar claims have been made for Swiss German Sign Language (DSGS) by Boyes Braem (2001)). Brentari & Crossley (2002) also observe similar spreading behavior with mouth gestures in their ASL dataset and formulate a constraint of one specification for the lower face per prosodic word. Cross-linguistic variation in mouth spreading behavior has also been reported in Crasborn et al. (2008). Here, the direction of spreading for mouthings appears to be: strictly rightwards in British Sign Language (BSL); rightwards in Sign Language of the Netherlands (NGT), with some evidence of bidirectional spreading; and in both directions in Swedish Sign Language (SSL). In each case, spreading was not always from content sign to function sign, and there was not always evidence that neighboring signs had cliticized to a host sign. Crasborn et al. (2008) tentatively suggest that their findings are indicative of language-specific differences, although further research with larger populations is required. They also note that the domain of mouth spreading can span more than two signs, suggesting that it is not limited to the prosodic word but can span larger prosodic constituents.

4.2.4 Phonological phrase

The next level in the prosodic hierarchy is the phonological phrase. Nespor & Vogel (1986) define the phonological phrase as consisting of a lexical head (X) and all the elements on its non-recursive side up to another head outside the maximal projection of X. In other words, phonological phrases tend to correspond with syntactic constituents such as noun and verb phrases. The existence of phonological phrases in spoken languages is often argued for with reference to phonetic correlates and phonological rules that have this constituent as their domain (e.g., Raddoppiamento Sintattico in Italian, see Nespor & Vogel (1986: 165)). Phonological phrases are also attested in sign languages, although much of the evidence to date comes from research involving ISL and ASL. Only one phonological process has been claimed to have the phonological phrase as its domain – that is, Non-dominant Hand Spread in ISL (Nespor & Sandler 1999; Sandler 2012). This is an optional post-lexical phonological process where systematic spreading behavior is observed on the non-dominant hand, with the phonological phrase as its domain. In such cases, the non-dominant hand can spread either leftwards, rightwards, or in both directions beyond the sign for which it is lexically specified, but not beyond the boundary of a phonological phrase. In some cases, the non-dominant hand may not always spread up to a phonological phrase boundary because spreading may be interrupted by the articulation of another two-handed sign. An example of this spreading behavior in ISL is provided below (from Sandler 2012).

[ BAKE  CAKE ]PP    'I baked a cake.'

Figure 4.3  Example of non-​dominant hand spreading behavior within an ISL phonological phrase (PP) (reprinted with permission from W. Sandler and D. Lillo-​Martin, Sign language and linguistic universals, Cambridge University Press)

In Figure 4.3, BAKE is a two-handed sign in ISL, and the non-dominant hand (the signer's left hand in Figure 4.3) involved in its articulation is held in position over the production of the sign CAKE, a one-handed sign. In theory, the non-dominant hand cannot spread onto signs following CAKE, since this would extend across a phonological phrase boundary. For other sign languages, this optional process is not always observed. Brentari & Crossley (2002) report ASL examples where the non-dominant hand can spread beyond a phonological phrase boundary. Similarly, in Sign Language of the Netherlands (NGT) and Russian Sign Language (RSL), spreading of the non-dominant hand appears not to be constrained by phonological phrase boundaries (Kimmelman et al. 2016). At the level of the phonological phrase, several manual cues have been observed to mark phrase boundaries as well. In ASL, signs in phrase-final position are noted to be lengthened, repeated, or held in position longer, making them more prominent than phrase-medial signs (Brentari & Crossley 2002; Wilbur 1999b). This would imply that ASL conforms to the tendency underlying relative prominence and phonological phrases set out for spoken language (Nespor & Vogel 1986) in that the rightmost element of a phrase (since ASL is a head-complement language) is marked for prominence. Such markers are present in both phonological and intonational phrases, but they are present to a lesser degree at phonological phrase boundaries and to a greater degree at intonational phrase boundaries.
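The ISL generalization can be restated procedurally: starting from a two-handed sign, the non-dominant hand may be held over following one-handed signs, but spreading halts at a phonological phrase boundary or at the next two-handed sign. The sketch below is illustrative only; the sign sequence, its phrasing, and the rightward-only direction are simplifying assumptions, not data from the studies cited above.

```python
# Sketch of Non-dominant Hand Spread as described for ISL: the non-dominant
# hand of a two-handed sign may be held over following one-handed signs,
# but not across a phonological-phrase boundary, and it is interrupted by
# the articulation of another two-handed sign. Only rightward spreading is
# modelled here; the example sentence and its phrasing are invented.

def rightward_spread_domain(signs, phrase_of, start):
    """Return the indices over which the non-dominant hand may be held,
    beginning with the two-handed sign at position `start`."""
    assert signs[start]["two_handed"], "spreading starts from a two-handed sign"
    domain = [start]
    for i in range(start + 1, len(signs)):
        if phrase_of[i] != phrase_of[start]:  # phonological phrase boundary
            break
        if signs[i]["two_handed"]:            # interrupted by a two-handed sign
            break
        domain.append(i)
    return domain


# Invented example: [ BAKE CAKE ]PP [ YESTERDAY ]PP
signs = [
    {"gloss": "BAKE", "two_handed": True},
    {"gloss": "CAKE", "two_handed": False},
    {"gloss": "YESTERDAY", "two_handed": False},
]
phrase_of = [0, 0, 1]  # phonological phrase index for each sign

domain = rightward_spread_domain(signs, phrase_of, 0)
print([signs[i]["gloss"] for i in domain])  # ['BAKE', 'CAKE']
```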

4.2.5 Intonational phrase

The next prosodic constituent above the phonological phrase is the intonational phrase. Intonational phrases typically correspond with the root sentence, although constructions that are structurally external to the sentence (such as parentheticals, non-restrictive relative clauses, and topicalizations) often form intonational phrases on their own. In spoken languages, intonational phrases are the domain of application for intonational contours and other segmental phonological rules (Nespor & Vogel 1986). These intonational contours, in turn, are associated with a wide range of pragmatic meanings. For example, a contour that rises towards the end of an utterance is typically associated
with interrogatives, and one that falls towards the end of an utterance is associated with declaratives. The edges of intonational phrases are typically marked with pauses and phrase-​final lengthening. The latter has been noted for many spoken languages cross-​ linguistically and is suggested to be a feature independent of any specific language (Vaissiere 1983). As in spoken languages, similar constructions (e.g., parentheticals, non-​ restrictive relative clauses, and topicalized elements) in ASL also tend to form intonational phrases (Sandler & Lillo-​Martin 2006). Several manual and non-​manual markers have been associated with these constituents for many sign languages. These prosodic markers do not always occur independently but combine with one another sequentially and simultaneously (i.e., prosodic layering, see Wilbur (2000)). Manual markers associated with the intonational phrase include stressed signs (or signs in focus) produced in phrase-​final position (Wilbur & Zelaznik 1997; Wilbur 1997, 1999b; Nespor & Sandler 1999), lengthened pauses (Grosjean & Lane 1977), and phrase-​ final lengthening (or holds) (Brentari et  al. 2011). Signs that are stressed are longer, produced higher in the signing space, are larger, and display increased muscle tension and sharper transition boundaries (see Nespor & Sandler 1999; Wilbur 1999b; Crasborn & van der Kooij 2013). In contrast to languages like English where stress can be shifted to the lexical item in focus, sign languages are said to prefer prominence in phrase-​ final position (Wilbur 1997; Nespor & Sandler 1999). The length of a pause can also be correlated with the strength of a boundary. In an examination of pauses produced by five native signers of ASL, the mean pause duration was highest between sentences (229ms) and lower at lower-​level boundaries (e.g., 134ms between conjoined clauses), as also noted for spoken languages, although sign language pauses are shorter overall (for spoken languages: > 445ms between sentences; 245–​445ms between conjoined sentences) (Grosjean & Lane 1977). Some non-​manual markers such as blinks have been associated with the boundaries of intonational phrases. Different categories of blinks have been proposed in the literature but we focus on boundary blinks here.1 Blinks at intonational phrase boundaries have been attested for many sign languages (Baker & Padden 1978; Wilbur 1994; Nespor & Sandler 1999; Sze 2008; Herrmann 2010). A cross-​linguistic analysis of the distribution of blinks documented in Tang et al. (2010) reveals that, although blinks in ASL, DSGS, Hong Kong Sign Language (HKSL), and Japanese Sign Language (JSL) consistently occur at intonational phrase boundaries, blinks in HKSL were also noted to frequently occur at lower-​level prosodic boundaries. The fact that blinks consistently occur at intonational phrase boundaries has led researchers to liken blinking to the act of breathing in spoken languages. However, Sze (2008) notes that the two differ in that markers for intonational phrase boundaries in spoken languages are directly related to the articulation of speech while blinks produced during signing are not, and she therefore questions their reliability as a consistent marker of intonational phrases. In contrast to the markers described above, which are associated with the edges of intonational phrases, other markers are associated with the domain of the intonational phrase (i.e., they typically span the entire phrase). 
These markers are typically produced on the face, although other, larger non-manual articulators such as the head or the torso are also associated with the domain of the intonational phrase. Facial expressions produced when signing have been described as being analogous to intonation in spoken languages in terms of their function (Nespor & Sandler 1999). For example, the use of furrowed or raised eyebrows with a constituent, as illustrated in Figure 4.4, can mark an interrogative utterance, just as a rising intonational contour does in spoken languages.

Furrowed brows    Raised brows

Figure 4.4  Interrogative facial expressions for questions seeking information (wh-​questions; left) and confirmation (yes/​no-​questions; right)

Facial expressions produced while signing have often been described as componential. That is, different components each provide a specific contribution to the overall meaning of the sentence (see also Wilbur, Chapter 24). An example of the facial expression associated with counter-factual conditionals in ISL will illustrate this point. In ISL, brow raise ('br') signals that the phrase this marker spans is linked to the following phrase. Additionally, lower-lid squint ('sq') is said to designate shared information between the signer and the addressee. When these two markers are combined, as in (2), they typically characterize counter-factual conditionals.

                       br + sq
(2)  IF GOALKEEPER HE CATCH BALL, WIN GAME WIN
     'If the goalkeeper had caught the ball, they would have won the game.'
     (ISL, Dachkovsky & Sandler 2009: 306)

Here, the brow raise and the lower-lid squint make a contribution to the overall meaning of the sentence. The lower-lid squint acknowledges that the signer is aware that the event did not happen, and the brow raise connects the information in the first and subsequent clause (what would have happened if the first clause was true) (Sandler & Lillo-Martin 2006; Dachkovsky & Sandler 2009). These markers are similar to intonational tunes since they can be broadly interpreted when viewed independently but gain specificity when produced in combination with other features and with the sentences they are co-articulated with. This fact lends support to the argument that these markers perform a similar function to intonational tunes in spoken languages.
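The componential character of these markers can be pictured as a small lookup in which each marker contributes a broad meaning and particular combinations receive a more specific reading, as with br + sq in (2). The sketch below is purely illustrative; the meaning strings are informal paraphrases of the ISL description above, not a formal semantic analysis.

```python
# Toy illustration of componential facial intonation (after the ISL
# description above): individual non-manual markers carry broad meanings,
# and certain combinations receive a more specific interpretation.
# The meaning labels are informal paraphrases for illustration only.

MARKER_MEANINGS = {
    "br": "the marked phrase is linked to the following phrase",
    "sq": "the marked information is shared with the addressee",
}

COMBINED_READINGS = {
    frozenset({"br", "sq"}): "counter-factual conditional (as in (2))",
    frozenset({"br"}): "broad linking reading (topic, conditional, etc.)",
}


def interpret(markers):
    markers = frozenset(markers)
    components = [f"{m}: {MARKER_MEANINGS[m]}" for m in sorted(markers)]
    combined = COMBINED_READINGS.get(markers, "no specific combined reading listed")
    return components, combined


components, combined = interpret({"br", "sq"})
for line in components:
    print(line)
print("combined reading:", combined)
```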

4.2.6 Relationship between syntactic and prosodic structure

Our description of sign language prosody in this chapter has made explicit reference to syntactic structure. For example, phonological phrases typically correspond with noun phrases and verb phrases, and intonational phrases can often be associated with clauses. Although many acknowledge that syntactic and prosodic constituents are non-isomorphic with one another (sometimes an intonational phrase may consist of two syntactic clauses) and are autonomous, there are differences of opinion in how the two systems interface with one another (see also Wilbur, Chapter 24).

On the one hand, syntax and prosody may have a direct link, and non-​isomorphic cases are exceptions to the general principles of alignment (e.g., Selkirk 2011). Within the field of sign language linguistics, this view is reflected in work such as Petronio & Lillo-​Martin (1997), Neidle et al. (2000), and Pfau & Quer (2010). Earlier descriptions of sign languages (notably ASL) were quick to observe that non-​manual features could be associated with specific constituent types (Liddell 1980). For example, yes/​no questions are marked with raised eyebrows, widened eyes, and the head and shoulders are moved forward while wh-​questions are marked with furrowed brows, (sometimes) combined with a backward head tilt. Raised eyebrows were also noted to be associated with topicalized constituents, relative clauses, and conditionals. An examination of the literature reveals similar behavior cross-​linguistically (see Pfau & Quer 2010). In the literature, these non-​ manual components are often presented as being intrinsically associated with specific syntactic constructions (Liddell 1980; Wilbur & Patschke 1999; Neidle et al. 2000). Working within this viewpoint, researchers have examined the distribution of these non-​manual markers to make a case for underlying syntactic structure. For example, linguists have attended to the position and (lack of) spreading of non-​manual markers to make a case for the direction of movement for wh-​elements (e.g., Cecchetto et al. 2009). Others have incorporated non-​manuals within their syntactic representation as being hosted by the head of functional projections and make explicit reference to this hierarchical structure to account for the spreading behavior observed (for instance, the spread of a non-​manual marker may be limited to its c-​command domain; e.g., Neidle et al. (2000)). The opposing view is that prosodic structure primarily interacts with syntax indirectly via semantics (e.g., Pierrehumbert & Hirschberg 1990; Truckenbrodt 2012), a view which is reflected in the sign language literature by Wilbur (1999a), Nespor & Sandler (1999), Dachkovsky & Sandler (2009), and Sandler (2010). One of the main arguments in support of this view, outlined in detail in Sandler (2010) and confirmed in Brentari et al. (2015), is that prosodic constituents do not always correspond to syntactic constituents (i.e., they may be non-​isomorphic, even if isomorphy is more common). Therefore, it follows that referring to prosodic, rather than syntactic, structure provides a more fitting description of the scope of these markers. In (3a), we see a case of isomorphism, and in (3b), an example of non-​isomorphism between the non-​manual marker and the syntactic constituent in ISL is presented.      

          cond
(3)  a.  IF YOU COME, I GO
         'If you come, I'll go.'

              y/n
     b.  IX2 LIKE ICE-CREAM VANILLA OR CHOCOLATE
         'Do you like vanilla ice cream or chocolate ice cream?'
         (ISL, Sandler & Lillo-Martin 2006: 463)

In order to support the translation given above in (3b), as opposed to a choice between vanilla ice cream or some chocolate, the non-manual marking associated with yes/no questions in ISL is completed after the articulation of VANILLA. If the brow raise were functioning as a syntactic marker, then one might expect the scope of the marker to span the entire phrase. Instead, reference to prosodic (and not syntactic) structure accounts for the behavior of this non-manual.

Second, prosodic phrasing of an utterance appears to be determined by signing rate. Like spoken languages, a given utterance can be phrased differently at the prosodic level depending on the speed at which it is signed. For example, Wilbur (1999b, 2009) reports that, when asked to vary signing rate, ASL signers demonstrate prosodic reorganization at the phrase level and consequently in the placement and production of non-​manual features (such as blinks and brow movement) and in the duration and number of pauses. This variability in intonational phrasing reinforces the viewpoint that sign language prosodic structure exhibits at least some degree of non-​isomorphism with syntactic structure. Furthermore, it can be expected that the prosodic phrasing of an utterance and the production of prosodic features may vary between signers depending on their style (idiolectal variation), as has been reported between sign language interpreters in the production of boundary markers (Nicodemus 2009), between deaf early and late learners of DSGS (Boyes Braem 1999) and with respect to the gender, race, and age of a signer in a study looking at rhythm in ASL (Brentari, Hill, et al. 2018). Variability in intonational phrasing depending on style and rate of speech is a characteristic that has also been observed for intonation in spoken languages (Cruttenden 1995). Lastly, work on ISL in particular has demonstrated that componential, non-​manual behavior appears in a range of structures; as mentioned above, raised brows are associated with topics, relative clauses, yes-​no questions, and conditionals. Dachkovsky & Sandler (2009) refer to descriptions of spoken languages to show that the meaning of brow raise correlates closely with the meanings associated with high boundary tones (e.g., Pierrehumbert & Hirschberg 1990). In each case, the broader meaning of brow raise gains a specific interpretation when combined with specific manual signs and other non-​manual features, as in (2) above, and when associated with syntactic structures, just as intonational tunes in spoken languages do. They differ, however, in that intonational tunes in spoken languages “consist of sequences of tones that cluster on stressed syllables and at prosodic boundaries” while in sign languages, “intonational arrays often characterize whole prosodic constituents” (Sandler 2010: 311).

4.3 Experimental studies

4.3.1 Perception of prosody

A range of experimental studies within the field of sign language prosody has been conducted. These studies underline the significance of prosodic markers produced at different levels of the prosodic hierarchy and, like related spoken language studies, demonstrate that signers attend to these markers carefully. For example, signers are able to identify which sign receives stress in an utterance (Wilbur & Schick 1987) and to indicate boundaries in a signed stream accurately (Fenlon et al. 2007), and they are even sensitive to affective prosody encoded on the hands when markers from the face are absent (Reilly et al. 1992; Hietanen et al. 2004). These studies lend weight to theoretical proposals about the underlying structure of prosodic units. For example, in a study of rhythmic perception in signing, native signers tapped along to five short ASL narratives (Allen et al. 1991). A close examination of the placement of these taps revealed that they were not associated with a specific part of a signed syllable. This is in contrast to spoken languages, where beats are typically placed between the onset and the rime, and it lends some support to the idea that the underlying structure of a sign language syllable is fundamentally different from its spoken language counterpart. This perceptual study is backed up by another study on backward signing by Wilbur & Petersen (1997), which illustrates that the way signers reverse their signing does not parallel the reversal of segments in spoken languages. For example, the ASL sign THINK is produced with an initial movement segment, where the index moves towards the forehead, and a final hold segment, where the index contacts the head. Reversing the segments alone would predict a form in which the index first contacts the forehead and is then followed by movement towards the forehead. Instead, signers also reverse the direction of the movement rather than reversing the segments alone. Based on their study, Wilbur & Petersen (1997) suggest a model of the syllable that refers specifically to movement and to specifications relating to beginning and end points, and argue that no further distinctions (e.g., onset, rime, etc.) are needed – which concurs with the Prosodic Model of the syllable (Brentari 1998). Experimental studies have also included groups of participants with differing language experiences (e.g., subjects who have not been exposed to a sign language). In doing so, these studies have been able to reveal the extent to which language experience is essential for full appreciation of sign language prosody. For example, with regard to syllable structure, Berent et al. (2013) demonstrated that signers as well as non-signers more readily associate syllable nuclei with movement rather than handshape. An early study on pauses by Grosjean & Lane (1977) reports significant agreement between two ASL judges and a non-signer in the placement and judgment of pauses in a signed stream. In the study of rhythmic perception mentioned above, hearing non-signers were also asked to tap along to five short ASL narratives. Their responses were judged to be similar to those of native signers (Allen et al. 1991). Additionally, native signing and hearing non-signing adults are reported to respond in a similar way when asked to indicate whether a sentence break has occurred in a signed stream (Fenlon et al. 2007; Brentari et al. 2011). Brentari et al. (2011) showed matched pairs of strings of signs, where one was a complete intonational phrase and one was made up of two intonational phrases (e.g., [GREEN VEGETABLES RABBITS EAT THEM] vs. [GREEN VEGETABLES] [RABBITS EAT THEM]), to typically developing hearing 9-month-old babies with no exposure to a sign language. Even without sign language exposure, the babies had significantly longer looking times for strings that were complete intonational phrases. Taken together, these studies suggest that knowledge of a sign language is not a prerequisite for the detection of some sign language prosodic cues. However, it should come as no surprise that non-signers are able to detect prosodic cues within a signed utterance. Studies involving spoken languages have illustrated how speakers are able to identify prosodic boundaries in languages that they do not know. For example, Carlson et al. (2005) report how speakers of American English are able to detect upcoming boundaries in Swedish, even when obvious cues such as pauses are removed from the stimuli. Additionally, it would be misleading to assume that non-signers have no experience with the visual modality.
Linguists working with speech prosody have observed that several visual markers (such as the brows, head nods, and the hands) are correlated with prosodic cues in speech (e.g., Flecha-Garcia 2006; Krahmer & Swerts 2007). Therefore, speakers may be relying (to some degree) on their experience in interpreting audio-visual cues of prosody in speech communication when confronted with tasks involving sign languages. But language experience still makes a valuable contribution. Native signers are better positioned to interpret these cues than non-signers by virtue of being life-long users of these languages. For example, in the rhythmic perception study previously mentioned
(Allen et al. 1991), the location of taps produced by signers and non-​signers coincided with repeated signs, signs with primary stress, and phrase-​final signs (these signs being crucial to rhythmic structure). However, the non-​signers tapped more often to signs with secondary or weak stress than the ASL signers. This tendency to respond more often to signs with secondary or weak stress indicates that experience with ASL is required in order to accurately judge rhythm in signing. In the Brentari et al. (2011) study, adult ASL signers and non-​signers were asked (i) to judge whether a sentence break occurred in an utterance presented to them, and (ii) to mark how confident they were in their decision. ASL signers were not statistically more accurate in boundary detection than non-​signers, but the two groups differed in their boundary marking strategies:  ASL signers relied on a single marker to indicate boundaries (pauses) while non-​signers relied on several markers to indicate boundaries (pauses, holds, and drop-​hands). In a more recent perceptual study, Brentari, Falk, et al. (2018) presented different types of imperatives from ASL (expressing either a command, an explanation, permission, or advice) to signers and non-​ signers from the USA and Germany. Although each imperative type has its own prosodic patterns in ASL, ASL signers were statistically more accurate than German Sign Language (DGS) signers and non-​signers at identifying the correct imperative speech act. Therefore, even though non-​signers can be accurate in perceptual tasks, they still perform differently from experienced ASL signers. The fact that DGS signers were less accurate than ASL signers suggests that language-​specific experience is important for interpreting prosodic cues reliably (as opposed to modality-​specific experience). The studies described above all show clearly that sign language prosody can be reliably perceived; in addition, some studies highlight the importance of language experience in being able to do this accurately and confidently (e.g., Allen et  al. 1991; Brentari et  al. 2011; Brentari, Falk et al. 2018). However, these findings in turn raise further questions as to the effectiveness of a particular marker when viewed in isolation or in combination with other markers, particularly given that sign languages are known to layer markers simultaneously and sequentially (cf. Wilbur 2000) and preference for a particular cue over others by signers is reported. It is not yet clear how these markers relate to one another and to other information from lexical and grammatical structure (e.g., which markers are reliable indicators of upcoming boundaries in sign languages?). More studies are needed that investigate how boundaries are perceived and the extent to which different visual markers and other cues available to the observer can contribute to the signal.

4.3.2 Acquisition

There have been many studies on language acquisition that have illuminated our understanding of sign language prosody and provided further justification for some of the viewpoints expressed in this chapter. For example, the importance of movement in the definition of a sign syllable appears to be supported by the observation that the repetition of movement appears as a rhythmic sequential unit produced by deaf infants at a similar milestone as vocal babbling observed in hearing children (Petitto & Marentette 1991). Age of acquisition plays a role in the timing and distribution of prosodic markers, with early learners demonstrating more proficient use of such cues. Early learners of DSGS were found to use a side-to-side movement of the torso marking discourse units more frequently than later learners, who instead preferred to use prosodic markers from their first language, German (Boyes Braem 1999). Differences in the timing and layering of multiple cues between early and later learners have also been reported in Stone (2009)
and Brentari et al. (2012). Brentari et al. (2012) analyzed the narratives of three groups of highly fluent signers (native (L1) Deaf, L1 hearing, and second-​language (L2) hearing) and found that some features, such as brow raise, are more prevalent in L1 signers (both hearing and Deaf), while some other features, such as the alignment of torso leans with constituent boundaries, are more prevalent in hearing signers (both L1 and L2). Their findings suggest that language experience has an effect on prosodic expression and further that experience with co-​speech gesture may produce a specific type of prosodic pattern in hearing signers, even highly fluent ones. Together, these findings suggest that the alignment and mastery of non-​manual features is learned at an early age and that there may be a critical period for the acquisition of sign language prosody. Language acquisition studies have generally demonstrated that the developmental milestones and the time course of first language acquisition are similar for deaf children acquiring a sign language and hearing children acquiring a spoken language (see Chen Pichler (2012) for an overview). As mentioned above, signing infants appear to produce manual babbling at a similar milestone to hearing children. In addition, perceptual and neuroimaging studies looking at infants watching signed stimuli suggested that babies at 6 months of age (whether exposed to signing or not) are sensitive to prosodic cues in sign language (Stone 2017; Stone et al. 2017). Such studies mirror spoken language studies demonstrating that infants are sensitive to prosodic cues in speech from a similar age (Nazzi et al. 1998; Nazzi & Ramus 2003). Early acquisition studies involving ASL syntax have typically focused on non-​ manual markers associated with specific syntactic structures, such as interrogatives and topicalized constituents. These studies have not always referred explicitly to prosodic structure since the presiding view at the time was that these markers were derived from syntactic structure. For example, Reilly et al. (1991) explain how children demonstrate an understanding of conditional clauses at 3;0 but prefer to use manual rather than non-​manual markers to indicate them in their productions (e.g., ASL IF ). Research on the acquisition of interrogatives in ASL suggests that the acquisition process is gradual taking several years. Children learning ASL appear to produce both manual and non-​ manual markers for wh-​questions as young as 18 months but do not combine the two appropriately until the age of 6 or 7 (Reilly et al. 1991; Lillo-​Martin 2000). These studies lend support to the prosodic interpretation of these markers since they demonstrate how prosodic cues are acquired compositionally in a gradual and analytic manner. Chen Pichler (2010) suggests that the acquisition of prosodic elements may be much earlier than assumed since studies looking at topicalized constituents are typically limited to a single non-​manual marker: the eyebrows. If the domain of inquiry is expanded to include all other possible markers, then it is possible to observe earlier indicators of prosodic structuring in deaf children learning to sign. 
Building on Nespor & Sandler's (1999) analysis, which suggests that prosodic markers such as widened eyes, head nods, and holds may also delimit topicalized constituents, Chen Pichler (2010) describes how utterances produced by a single child learning ASL display evidence of simple prosodic breaks, characterized by repetition or holding of the topic sign and followed by a change in head position, at 24.5 months, nearly a year earlier than has been suggested by Reilly et al. (1991). Few acquisition studies of prosody have focused on the later stages of language acquisition. Brentari et al. (2015) investigate the distribution of prosodic cues at boundaries in ASL across three age groups: two younger children (5;0–6;1), five older children (7;8–8;5), and four adults (aged 35–58). As with earlier acquisition studies, they find
evidence that prosodic cues are acquired compositionally. While there is no difference between groups in the use of sign duration, blinks, head position, and eyebrows to mark boundaries, the younger children produced longer transition durations in phrase-​final position, the older children had a lower proportion of holds in similar positions, and both groups of children had a lower proportion of changes in body position at boundaries. This finding suggests that boundary markers are distributed appropriately as early as 5  years but complete integration of all cues with prosodic structure requires more time. Furthermore, the statistical findings reported in this study also indicate that manual cues are more predictive of prosodic boundaries than non-​manual cues for all groups. For the younger children and the adults, both manual and non-​manual cues together are more predictive than manual cues alone, which is a logical finding (i.e., the more cues there are available, the better). For the older children, however, manual cues alone are more predictive of prosodic boundaries than all cue types combined. The authors reason that this group may be in the process of acquiring additional complex linguistic structures (e.g., the use of depicting constructions in which the body and the face may be recruited as a mimetic device) and that non-​manual features become more variable and undergo reorganization as a consequence. Additionally, Brentari et al. (2015) refer to several acquisition studies involving spoken language which demonstrate that mastery of durational cues (e.g., syllable-​final lengthening) appears to precede mastery of intonational cues (e.g., fundamental frequency), which parallels the general observation here that manual durational markers precede intonational non-​manual markers in the acquisition process.

4.3.3  Emergence of prosodic structure

Sign languages also provide us with a unique opportunity to look at how a language emerges with relatively little input. We describe two such studies with a focus on prosody, both of which suggest that prosody plays an important role in the early stages of language creation. The first study looks at the development of prosodic structure in a single homesigner, while the second looks at its development in a newly emerging sign language that has developed within a community of signers.

The term 'homesigner' refers to deaf children who have not been able to learn a spoken or signed language. Owing to this lack of input, homesigners use gesture in order to communicate with others. Several studies have illustrated how homesign systems develop language-like properties such as recursion and hierarchical structure over time (see Goldin-Meadow (2012) for an overview). Although studies have often focused on a single homesigner, similar patterns have been observed when comparing homesigners across different cultures (Goldin-Meadow & Mylander 1998). This cross-cultural similarity suggests that homesigners are not learning a gestural system from their parents. Applebaum et al. (2014) study the frequency and distribution of prosodic cues in the spontaneous output of a young American homesigner known as David to investigate whether homesigning displays the characteristics of a prosodic system typically associated with sign languages. They focus their attention on both manual features (holds, repetitions, and emphatic movements) and non-manual features (head tilts and nods) occurring at boundaries of different types (utterance or lower-level boundaries related to propositions) at three moments in David's childhood (at 3;5, 3;11, and 5;2). Their results indicate that both manual and non-manual markers (as a single category) were more frequent at the end of utterance boundaries when compared to lower-level boundaries.

However, no differences were noted in the mean number of prosodic features over the two-year period in which David was observed. Although these findings suggest that prosodic features in homesign pattern in a similar way to those of sign languages, and that they can be observed in the early stages of an emerging system, the authors propose that the lack of change in the use of these features over the period of study suggests that a larger number of signers (or even an intergenerational language transmission pattern) is required for a fully-fledged prosodic system to emerge.2

An additional study illustrates this point. In Sandler et al. (2011), the emergence of prosodic and syntactic features in a young sign language, Al-Sayyid Bedouin Sign Language (ABSL), is described. ABSL differs from sign languages such as ASL and BSL in that it is a sign language that has developed in the past 75 years in a Bedouin village with a high incidence of genetic deafness. It is also believed that this language developed in relative isolation from other languages and therefore without a language model (although see Kisch (2012) for a detailed description of the sociolinguistic situation in this community). The first generation of signers were four deaf children born to a single family who would have used a form of homesign like that mentioned above, although this system would have been shared among them (unlike David, who was the only person using homesign). As it is passed on to subsequent generations, ABSL offers a unique opportunity to witness how language structure emerges, an opportunity that one does not have with spoken languages (even pidgin speakers are initially native speakers of a language and may be influenced by their mother tongue).

In their analysis of ABSL narratives by two pairs of signers separated by a period of 15 years, the older signers (those who had no consistent language model) favor timing cues to mark prosodic constituents and display limited use of intonational cues, while the younger signers (those who had a more consistent language model) use non-manual intonational cues more frequently, and these cues typically span the domain of intonational phrases. The authors also observe an increase in syntactic complexity alongside prosodic complexity: younger signers produced more noun phrases overall, and these noun phrases were clearly associated with predicates via prosodic structure. In contrast, the signing of the older signers was described as having a listing prosody, which made it difficult to comprehend overall, in part because it was difficult to determine which arguments were to be associated with which predicate. Sandler et al.'s (2011) analysis also reveals prosodic complexity among the younger signers, since they were able to signal dependency between clauses in the absence of manual markers (e.g., using the sign IF to signal a dependent conditional clause). In the absence of such manual markers, prosody is the sole marker of clause dependency. Given this ability, the younger signers can link clauses, such as If the goalkeeper had caught the ball, they would have won the game (as in (2)), in a way that the older signers do not. In other words, the output of older signers consists of simple clauses while the younger signers' productions show increased complexity. This study thus illustrates that prosodic complexity emerges alongside (or may even precede) syntactic complexity. The study also indicates that, even if some aspects of prosodic structure are present in homesign, the emergence of a fully-fledged prosodic system is a gradual process, requiring time and generations to develop.

4.3.4  Neurolinguistic studies

Thus far, the experimental studies we have described have included typically developing populations of signers or typical cases of ontogenetic or historical development.
Lastly, we turn to neurolinguistic studies. There has been little published in this area with respect to sign language prosody. Brain-imaging studies have revealed that, as in spoken languages, sign language processing is asymmetric, with a strong preference for the left hemisphere (see Emmorey (2002) for an overview). In such studies, traditional language areas associated with speech (e.g., Broca's and Wernicke's areas) are also active when watching someone sign, illustrating that these areas process language regardless of modality. Neurolinguistic studies on spoken languages have ascribed a right-hemisphere role to prosody (e.g., Ross & Mesulam 1979; Meyer et al. 2002; Gandour et al. 2003). Although there have been few brain-imaging studies looking specifically at sign language prosody, a similar preference for the right hemisphere has been reported for sign languages. Newman et al. (2010) presented deaf signers with pairs of sentences in ASL that were similar in propositional content but varied in whether prosodic cues (such as facial expression or head and body movement) were present. They found increased activation in the right hemisphere for ASL sentences that were produced with marked prosodic cues. Since these areas within the right hemisphere have also been associated with the processing of speech prosody, the same neural networks appear to be recruited when processing prosodic cues, whether spoken or signed.

The importance of the right hemisphere for processing sign language prosody has also been demonstrated in studies involving atypical signers. In a study involving BSL signers with right and left hemisphere lesions, the perception of manual and non-manual features of negation was tested (Atkinson et al. 2004). Using a picture selection task, signers were asked to match the correct picture to a signed statement. Each statement varied according to whether it featured a manual (lexical) and a non-manual marker of negation, or a non-manual marker alone. Results showed that the right-lesioned signers were unable to fully understand negative statements when negation was encoded only through non-manual features (i.e., manual information was absent). These results suggest that non-manual features of negation are processed in the right hemisphere, unlike syntactic elements of sign language, and therefore may be, in part, prosodic. Such a study has important consequences for our understanding of the underlying syntactic structure, since these markers feature heavily in syntactic representations (e.g., Neidle et al. 2000; Quer 2012; Pfau 2016). Additionally, descriptions of sign language prosody have often neglected to mention negation, focusing instead on facial expression as intonation and its association with prosodic constituents. The Atkinson et al. study clearly indicates that the descriptive scope should perhaps be widened to include non-manual negation (see also Gökgöz, Chapter 12).

Further studies involving atypical signers have also provided insights into underlying phonological structure. Poizner et al. (2000) describe how the production of prosody is impaired in a group of six ASL participants diagnosed with Parkinson's disease. These signers, whose symptoms ranged in severity from mild to severe, demonstrated a clear understanding of the syntax and morphology of ASL but were impaired at the phonetic level, showing simplification of the complexity and timing properties of sign sequences.
Studies involving Parkinsonian signers build on our understanding of the role of the basal ganglia in language processing by providing a different perspective: these studies are able to directly observe the articulators involved in sign language production and how they are affected. Poizner et al. (2000) outline a number of characteristics observed in their dataset.
For example, comparing mild, moderate, and severe cases of Parkinson's disease, facial masking (i.e., decreasing use of non-manual features on the face) increases across the three groups, and path movement is transferred from proximal to more distal joints. There is also a notable disruption to timing cues such as pauses: pause length does not appear to be correlated with boundary type (i.e., the differences in pause length between word-final, phrase-final, and utterance-final pauses are reduced), leading to a lack of the rhythmic variation typically observed in the control group. Movement between signs was also affected. In typical signing, handshape change and movement are closely coordinated within a sign, in contrast to between signs. This contrast was notably absent in Parkinsonian signers, who tend to anticipate the handshapes of upcoming signs earlier during transitional movements, thus obscuring boundaries between signs (Brentari & Poizner 1994). The lack of dynamic changes in the non-manuals and in the joints producing the signs' movement (not the size of movement per se; see Wilbur & Schick (1987); Wilbur (1990)), as well as the lack of rhythmic variation and clear boundaries between signs, contribute to the perception that the signing is monotonous. These characteristics have been used to make a case for underlying phonological structure within the Prosodic Model (Brentari 1998). For example, movement migration (from a proximal to a more distal joint) lends support to the view that there is an internal hierarchical structure for the representation of movement that refers to the joints typically used to articulate signs.

4.4  Future directions: the relationship between audio-visual prosody and sign language prosody

An important area for future enquiry is the relationship between audio-visual prosody and sign language prosody. There is a growing body of literature demonstrating that audio-visual markers produced during speech are aligned with spoken prosody. Head movements have been linked to the production of suprasegmental features of speech such as prominence (e.g., Hadar et al. 1983; Graf et al. 2002). Eyebrow raises align with the intonational contour during speech by occurring with pitch accents (Flecha-Garcia 2006; Guaïtella et al. 2009). The timing of body movements (such as movements of the head and the hands) and underlying prosodic structure are closely related. For example, manual gestures are noted to coincide with an accented syllable (e.g., McNeill 1992). These studies illustrate that the use of particular markers in production tends to be idiosyncratic in degree or strength, but in each case the marker is consistently time-aligned with prosodic features in speech.

In addition, a number of studies have explored the perception of such audio-visual markers in experimental settings, demonstrating that these markers are not simply redundant features but make an important contribution to communication. For example, addressees can use head movements to determine which word in a sentence is receiving emphatic stress and to discriminate statements from questions (Bernstein et al. 1998), and they have learned where to direct their attention for different aspects of speech, spending more time focusing on the upper part of the face when asked to make decisions regarding intonation in vision-only conditions (Lansing & McConkie 1999). When these visual cues are not aligned appropriately with speech, they can have a negative effect on reaction times (e.g., Swerts & Krahmer 2005). Manual gestures have also been demonstrated to act as an important cue to prosodic structure by assisting in resolving sentence ambiguities (Guellai et al. 2014).

Studies comparing responses in audio-only, vision-only, and audio-visual conditions have sometimes reported that participants do best in audio-visual conditions. Barkhuysen
et al. (2008), for instance, found that participants, when asked to indicate the end of an utterance (presented in full or fragmented), were more successful in the audio-​visual condition than the audio condition. Borràs-​Comes & Prieto (2011) conducted two perceptual experiments with acoustic and visual prosodic cues showing that facial gestures are the most influential elements that Catalan listeners rely on to decide between contrastive focus and echo question interpretations, but bimodal integration with the acoustic cues was necessary for perceptual processing to be fast and accurate. These studies suggest that a bimodal presentation enhances perception since more cues are available to the participants. However, this may not always be the case. Krahmer et  al. (2002) report a stronger effect for the audio-​only condition in the perception of prominence in a synthesized talking head. They reason that, since speakers have learned to pay more attention to cues in speech than on the face, speech has a more dominant role in language perception; yet, in cases where audio cues are unclear, visual markers can make a valuable contribution to processing. For example, non-​manuals have been shown to enhance comprehension when the auditory signal is whispered, and have less of an effect when acoustic cues are clear (Dohen & Lœvenbruck 2009). What do these studies mean for sign language prosody? Since there is growing evidence supporting a role for audio-​visual prosody in spoken language production and perception, future work on spoken language prosody will adopt a multi-​modal analysis of language production and routinely include properties of the face and body. As Guellai et al. (2014) state: “[…] spontaneous gestures and speech form a single communication system where the suprasegmental aspects of spoken languages are mapped to the motor-​ programs responsible for the production of both speech sounds and hand gestures.” Such conclusions encourage us to directly compare the production of sign language prosody to a multi-​modal analysis of speech communication. This comparison is appropriate since, here, like is being compared with like  –​by focusing on non-​manual prosodic material in both signed and spoken languages. Very few studies have been conducted within the sign language field that incorporate such an approach. One such study, in which blinks produced by signers are compared with blinks produced by the surrounding speaking community, is presented by Tang et al. (2010). While they found a difference in blink rate between ASL signers and American speakers, they did not find a difference in blink rate between HKSL signers and Cantonese speakers. They also observed that the distribution of blinks differed between signers and speakers, with blinks in the latter group not being correlated with the edge of prosodic phrases. The authors conclude that the mixed statistical findings indicate that –​at least for HKSL –​some influence of the surrounding spoken language community cannot be ruled out. When further studies comparing sign language prosody with audio-​visual prosody are conducted, we will be in a position to highlight what is unique to the prosodic structure of sign languages, and what is characteristic of face-​to-​face communication generally.

4.5  Summary and conclusion

In both signed and spoken languages, prosody plays an important role in meaning and, for both types of languages, displays a similar type of hierarchical structure. Across sign languages, reference to both non-manual and manual markers has proven to be essential to our understanding of sign language prosody. As explained, manual markers generally provide timing cues to prosodic constituents, while non-manual markers provide intonational cues.
The importance of these markers in the prosodic organization of sign languages has also been demonstrated in experimental studies focusing on typically developing populations of signers as well as on atypical signers and cases of ontogenetic or historical development. Although studies focusing on spoken languages have revealed that cues on the face and body are closely aligned with spoken prosody, sign languages differ from spoken languages because the visual channel is their primary channel for communication. For users of spoken languages, both the audio and the visual channel are available in face-to-face communication, and both make a contribution to language production and comprehension. Therefore, the extent to which spoken and sign languages differ in their use of visual cues for prosody allows us to understand how visual markers of prosody (such as cues on the face) are further recruited to convey prosodic information when the visual channel is the only available means for communication. The survey of the field presented here also suggests that there are some cross-modal communicative patterns that may underlie certain prosodic patterns in all languages, such as phrase-final lengthening and adding cues for prominence, and that signers and non-signers can detect some basic prosodic structures in foreign languages, even ones in another modality, and even when they involve sentence meaning. Research on sign languages (as well as work on audio-visual prosody), therefore, encourages us to look beyond speech when defining prosody. In doing so, we can begin to uncover and understand some of the fundamental ways in which we structure complex streams of information.

Acknowledgments

We thank Wendy Sandler for supplying the images shown in Figures 4.2 and 4.3 and Adam Stone and Ryan Barrett for modeling the examples shown in Figures 4.1 and 4.4, respectively.

Notes

1 In addition to boundary blinks, other blink types include: voluntary lexical blinks, which perform a semantic/prosodic function marking emphasis, assertion, or stress; physiologically induced blinks (caused by the hands moving too close to the eyes); blinks associated with a change in gaze or head position; and blinks associated with hesitation.
2 The authors also note that their findings may be obscured by the characteristically short utterances (consisting of two or three signs) David produces. Utterances of longer length may reveal more patterns in prosodic distribution.

References

Allen, George D., Ronnie Wilbur, & Brenda Schick. 1991. Aspects of rhythm in American Sign Language. Sign Language Studies 72. 297–320.
Applebaum, Lauren, Marie Coppola, & Susan Goldin-Meadow. 2014. Prosody in a communication system developed without a language model. Sign Language & Linguistics 17(2). 181–212.
Atkinson, Jo, Ruth Campbell, Jane Marshall, Alice Thacker, & Bencie Woll. 2004. Understanding 'not': neuropsychological dissociations between hand and head markers of negation in BSL. Neuropsychologia 42(2). 214–229.
Baker, Charlotte & Carol Padden. 1978. Focusing on the non-manual components of American Sign Language. In Patricia Siple (ed.), Understanding language through sign language research, 27–58. New York: Academic Press.
Barkhuysen, Pashiera, Emiel Krahmer, & Marc Swerts. 2008. The interplay between auditory and visual modality for end-of-utterance detection. Journal of the Acoustical Society of America 123(1). 354–365.
Battison, Robbin. 1978. Lexical borrowing in American Sign Language. Silver Spring, MD: Linstok Press.
Berent, Iris, Amanda Dupuis, & Diane Brentari. 2013. Amodal aspects of linguistic design. PLoS ONE 8(4), e60617.
Bernstein, Lynne E., Silvio P. Eberhardt, & Marylin E. Demorest. 1998. Single-channel vibrotactile supplements to visual perception of intonation and stress. Journal of the Acoustical Society of America 85. 397–405.
Borràs-Comes, Joan & Pilar Prieto. 2011. 'Seeing tunes.' The role of visual gestures in tune interpretation. Journal of Laboratory Phonology 2. 355–380.
Boyes Braem, Penny. 1999. Rhythmic temporal patterns in the signing of deaf early and late learners of Swiss German Sign Language. Language and Speech 42(2–3). 177–208.
Boyes Braem, Penny. 2001. Functions of the mouthing component in Swiss German Sign Language. In Diane Brentari (ed.), Foreign vocabulary in sign languages, 1–45. Mahwah, NJ: Lawrence Erlbaum.
Brentari, Diane. 1990. Licensing in ASL handshape change. In Ceil Lucas (ed.), Sign language research: Theoretical issues, 57–68. Washington, DC: Gallaudet University Press.
Brentari, Diane. 1993. Establishing a sonority hierarchy in American Sign Language: The use of simultaneous structure in phonology. Phonology 10(2). 281–306.
Brentari, Diane. 1998. A prosodic model of sign language phonology. Cambridge, MA: MIT Press.
Brentari, Diane. 2002. Modality differences in sign language phonology and morphophonemics. In Richard P. Meier, David Quinto-Pozos & Kearsy Cormier (eds.), Modality and structure in signed and spoken languages, 35–64. Cambridge: Cambridge University Press.
Brentari, Diane & Laurinda Crossley. 2002. Prosody on the hands and face: Evidence from American Sign Language. Sign Language & Linguistics 5(2). 105–130.
Brentari, Diane, Joshua Falk, Anastasia Giannakidou, Annika Herrmann, Elisabeth Volk, & Markus Steinbach. 2018. Production and comprehension of prosodic markers in sign language imperatives. Frontiers in Psychology: Language Sciences (Special Issue on Visual Language) 9.
Brentari, Diane, Joshua Falk, & George Wolford. 2015. The acquisition of prosody in American Sign Language. Language 91(3). 144–168.
Brentari, Diane, Carolina González, Amanda Siedl, & Ronnie Wilbur. 2011. Sensitivity to visual prosodic cues in signers and nonsigners. Language and Speech 54(1). 49–72.
Brentari, Diane, Joseph Hill, & Brianne Amador. 2018. Variation in phrasal rhythm in sign languages: Introducing 'rhythm ratio'. Sign Language & Linguistics 21(1). 38–74.
Brentari, Diane, Marie A. Nadolske, & George Wolford. 2012. Can experience with co-speech gesture influence the prosody of a sign language? Sign language prosodic cues in bimodal bilinguals. Bilingualism: Language and Cognition 15(02). 402–412.
Brentari, Diane & Howard Poizner. 1994. A phonological analysis of a deaf Parkinsonian signer. Language and Cognitive Processes 9(1). 69–100.
Carlson, Rolf, Julia Hirschberg, & Marc Swerts. 2005. Cues to upcoming Swedish prosodic boundaries: Subjective judgment studies and acoustic correlates. Speech Communication 46(3–4). 326–333.
Cecchetto, Carlo, Carlo Geraci, & Sandro Zucchi. 2009. Another way to mark syntactic dependencies: The case for right-peripheral specifiers in sign languages. Language 85(2). 278–320.
Chen Pichler, Deborah. 2010. Using early ASL word order to shed light on word order variability. In Merete Anderssen, Kristine Bentzen, & Marit Westergaard (eds.), Variation in the input: Studies in the acquisition of word order (Studies in Psycholinguistics, Vol 39), 157–177. Dordrecht: Springer.
Chen Pichler, Deborah. 2012. Acquisition. In Roland Pfau, Markus Steinbach, & Bencie Woll (eds.), Sign language: An international handbook (HSK – Handbooks of Linguistics and Communication Science), 647–686. Berlin: De Gruyter Mouton.
Corina, David. 1990. Reassessing the role of sonority in syllable structure: Evidence from a visual-gestural language. Papers from the 26th Annual Meeting of the Chicago Linguistic Society, Vol 2: Parasession on the Syllable in Phonetics and Phonology, 33–44. Chicago, IL: University of Chicago.
Coulter, Geoffrey. 1982. On the nature of ASL as a monosyllabic language. Paper presented at the Annual Meeting of the Linguistic Society of America, San Diego, CA.
Crasborn, Onno & Els van der Kooij. 2013. The phonology of focus in Sign Language of the Netherlands. Journal of Linguistics 49(3). 515–565.
Crasborn, Onno, Els van der Kooij, Dafydd Waters, Bencie Woll, & Johanna Mesch. 2008. Frequency distribution and spreading behaviour of different types of mouth actions in three sign languages. Sign Language & Linguistics 11(1). 46–67.
Cruttenden, Alan. 1995. Intonation. Cambridge: Cambridge University Press.
Cutler, Anne, Delphine Dahan, & Wilma van Donselaar. 1997. Prosody in the comprehension of spoken language: A literature review. Language and Speech 40(2). 141–201.
Dachkovsky, Svetlana & Wendy Sandler. 2009. Visual intonation in the prosody of a sign language. Language and Speech 52(2–3). 287–314.
Dohen, Marion & Hélène Lœvenbruck. 2009. Interaction of audition and vision for the perception of prosodic contrastive focus. Language and Speech 52(2–3). 177–206.
Duarte, Kyle. 2012. Motion capture and avatars as portals for analyzing the linguistic structure of signed languages. Université de Bretagne Sud PhD dissertation.
Emmorey, Karen. 2002. Language, cognition, and the brain: Insights from sign language research. Mahwah, NJ: Lawrence Erlbaum.
Fenlon, Jordan, Tanya Denmark, Ruth Campbell, & Bencie Woll. 2007. Seeing sentence boundaries. Sign Language & Linguistics 10(2). 177–200.
Flecha-Garcia, Maria L. 2006. Eyebrow raising in dialogue: Discourse structure, utterance function, and pitch accents. Edinburgh: University of Edinburgh PhD dissertation.
Gandour, Jackson, Mario Dzemidzic, Donald Wong, Marc Lowe, Yunxia Tong, Li Hsieh, Nakarin Satthamnuwong, & Josep Lurito. 2003. Temporal integration of speech prosody is shaped by language experience: An fMRI study. Brain and Language 84(3). 318–336.
Geluykens, Ronald & Marc Swerts. 1994. Prosodic cues to discourse boundaries in experimental dialogues. Speech Communication 15. 69–77.
Geraci, Carlo. 2009. Epenthesis in Italian Sign Language. Sign Language & Linguistics 12. 3–51.
Goldin-Meadow, Susan. 2012. Homesign: Gesture to language. In Roland Pfau, Markus Steinbach & Bencie Woll (eds.), Sign language: An international handbook (HSK – Handbooks of Linguistics and Communication Science), 601–625. Berlin: De Gruyter Mouton.
Goldin-Meadow, Susan & Carolyn Mylander. 1998. Spontaneous sign systems created by deaf children in two cultures. Nature 391. 279–281.
Graf, Hans Peter, Eric Cosatto, Volker Strom, & Fu Jie Huang. 2002. Visual prosody: Facial movements accompanying speech. Proceedings of the 5th IEEE International Conference on Automatic Face and Gesture Recognition, 396–401. Washington, DC.
Grosjean, François & Harlan Lane. 1977. Pauses and syntax in American Sign Language. Cognition 5. 101–117.
Guaïtella, Isabelle, Serge Santi, Benoît Lagrue, & Christian Cavé. 2009. Are eyebrow movements linked to voice variations and turn-taking in dialogue? An experimental investigation. Language and Speech 52(2–3). 207–222.
Guellai, Bahia, Alan Langus, & Marina Nespor. 2014. Prosody in the hands of the speaker. Frontiers in Psychology 5. 1–8.
Hadar, Uri, Timothy J. Steiner, E.C. Grant, & Frank Clifford Rose. 1983. Head movement correlates of juncture and stress at sentence level. Language and Speech 26. 117–129.
Herrmann, Annika. 2010. The interaction of eye blinks and other prosodic cues in German Sign Language. Sign Language & Linguistics 13(1). 3–39.
Hietanen, Jari K., Jukka M. Leppänen, & Ulla Lehtonen. 2004. Perception of emotion in the hand movement quality of Finnish Sign Language. Journal of Nonverbal Behavior 28(1). 53–64.
Kimmelman, Vadim, Anna Sáfár, & Onno Crasborn. 2016. Towards a classification of weak hand holds. Open Linguistics 2(1). 211–234.
Kisch, Shifra. 2012. Demarcating generations of signers in the dynamic sociolinguistic landscape of a shared sign language: The case of the Al-Sayyid Bedouin. In Ulrike Zeshan & Connie de Vos (eds.), Sign languages in village communities: Anthropological and linguistic insights, 87–126. Nijmegen: Ishara Press.
Krahmer, Emiel, Zsófia Ruttkay, Marc Swerts, & Wieger Wesselink. 2002. Pitch, eyebrows and the perception of focus. Proceedings of the Speech Prosody 2002 Conference, 443–446. Aix-en-Provence: Laboratoire Parole et Langage.
Krahmer, Emiel & Marc Swerts. 2007. The effect of visual beats on prosodic prominence: Acoustic analyses, auditory perception and visual perception. Journal of Memory and Language 57. 396–414.
Lansing, Charissa R. & George W. McConkie. 1999. Attention to facial regions in segmental and prosodic visual speech perception tasks. Journal of Speech, Language and Hearing Research 42. 526–539.
Liddell, Scott K. 1980. American Sign Language syntax. The Hague: Mouton.
Liddell, Scott K. & Robert E. Johnson. 1989. American Sign Language: the phonological base. Sign Language Studies 64. 195–278.
Lillo-Martin, Diane. 2000. Early and late in language acquisition: Aspects of the syntax and acquisition of WH-questions in American Sign Language. In Karen Emmorey & Harlan Lane (eds.), The signs of language revisited: An anthology to honor Ursula Bellugi and Edward Klima, 401–414. Mahwah, NJ: Lawrence Erlbaum.
Mandel, Mark. 1981. Phonotactics and morphophonology in American Sign Language. Berkeley, CA: University of California PhD dissertation.
McNeill, David. 1992. Hand and mind: What gestures reveal about thought. Cambridge: Cambridge University Press.
Meyer, Martin L., Kai Alter, Angela D. Friederici, Gabriele Lohmann, & D. Yves von Cramon. 2002. FMRI reveals brain regions mediating slow prosodic modulations in spoken sentences. Human Brain Mapping 17(2). 73–88.
Nazzi, Thierry, Josiane Bertoncini, & Jacques Mehler. 1998. Language discrimination by newborns: Towards an understanding of the role of rhythm. Journal of Experimental Psychology. Human Perception and Performance 24(3). 756–766.
Nazzi, Thierry & Franck Ramus. 2003. Perception and acquisition of linguistic rhythm by infants. Speech Communication 41(1). 233–243.
Neidle, Carol, Judy Kegl, Dawn MacLaughlin, Benjamin Bahan, & Robert Lee. 2000. The syntax of American Sign Language: Functional categories and hierarchical structure. Cambridge, MA: MIT Press.
Nespor, Marina & Wendy Sandler. 1999. Prosody in Israeli Sign Language. Language and Speech 42(2–3). 143–176.
Nespor, Marina & Irene Vogel. 1986. Prosodic phonology. Dordrecht: Foris Publications.
Newman, Aaron J., Ted Supalla, Peter C. Hauser, Elissa L. Newport, & Daphne Bavelier. 2010. Prosodic and narrative processing in American Sign Language: an fMRI study. NeuroImage 52(2). 669–676.
Nicodemus, Brenda. 2009. Prosodic markers and utterance boundaries in American Sign Language interpretation. Washington, DC: Gallaudet University Press.
Perlmutter, David M. 1992. Sonority and syllable structure in American Sign Language. Linguistic Inquiry 23. 407–442.
Petronio, Karen & Diane Lillo-Martin. 1997. WH-movement and the position of spec-CP: Evidence from American Sign Language. Language 73. 18–57.
Petitto, Laura & Paula Marentette. 1991. Babbling in the manual mode: Evidence for the ontogeny of language. Science 25. 1493–1496.
Pfau, Roland. 2016. A featural approach to sign language negation. In Pierre Larrivée & Chungmin Lee (eds.), Negation and polarity. Experimental perspectives, 45–74. Dordrecht: Springer.
Pfau, Roland & Josep Quer. 2010. Nonmanuals: Their grammatical and prosodic roles. In Diane Brentari (ed.), Sign languages, 381–402. Cambridge: Cambridge University Press.
Pierrehumbert, Janet & Julia Hirschberg. 1990. The meaning of intonational contours in discourse. In Philip R. Cohen, Jerry Morgan, & Martha E. Pollack (eds.), Intentions in communication, 271–311. Cambridge, MA: MIT Press.
Poizner, Howard, Diane Brentari, Judy Kegl, & Martha E. Tyrone. 2000. The structure of language as motor behavior: Evidence from signers with Parkinson's Disease. In Karen Emmorey & Harlan Lane (eds.), The signs of language revisited: An anthology to honor Ursula Bellugi and Edward Klima, 509–532. Mahwah, NJ: Lawrence Erlbaum.
Quer, Josep. 2012. Negation. In Roland Pfau, Markus Steinbach, & Bencie Woll (eds.), Sign language: An international handbook (HSK – Handbooks of Linguistics and Communication Science), 316–338. Berlin: De Gruyter Mouton.
Reilly, Judy, Marina McIntire, & Ursula Bellugi. 1991. Babyface: A new perspective on universals of language acquisition. In Patricia Siple & Susan Fischer (eds.), Theoretical issues in sign language research: Psycholinguistics, 9–23. Chicago: University of Chicago Press.
Reilly, Judy, Marina McIntire, & Howie Seago. 1992. Affective prosody in American Sign Language. Sign Language Studies 75. 113–128.
Ross, Elliott D. & Marek-Marsel Mesulam. 1979. Dominant language functions of the right hemisphere? Prosody and emotional gesturing. Archives of Neurology 36(3). 144–148.
Sandler, Wendy. 1986. The spreading hand autosegment of American Sign Language. Sign Language Studies 50. 1–28.
Sandler, Wendy. 1989. Phonological representation of the sign: Linearity and non-linearity in American Sign Language. Dordrecht: Foris.
Sandler, Wendy. 1993. A sonority cycle in American Sign Language. Phonology 10(2). 209–241.
Sandler, Wendy. 1999. Cliticization and prosodic words in a sign language. In Tracy A. Hall & Ursula Kleinhenz (eds.), Studies on the phonological word, 223–254. Amsterdam: John Benjamins.
Sandler, Wendy. 2010. Prosody and syntax in sign languages. Transactions of the Philological Society 108(3). 298–328.
Sandler, Wendy. 2012. Visual prosody. In Roland Pfau, Markus Steinbach, & Bencie Woll (eds.), Sign language: An international handbook (HSK – Handbooks of Linguistics and Communication Science), 55–76. Berlin: De Gruyter Mouton.
Sandler, Wendy & Diane Lillo-Martin. 2006. Sign language and linguistic universals. Cambridge: Cambridge University Press.
Sandler, Wendy, Irit Meir, Svetlana Dachkovsky, Carol Padden, & Mark Aronoff. 2011. The emergence of complexity in prosody and syntax. Lingua 121. 2014–2033.
Selkirk, Elisabeth. 2011. The syntax-phonology interface. In John Goldsmith, Jason Riggle, & Alan Yu (eds.), The handbook of phonological theory (2nd edition), 435–484. Oxford: Wiley-Blackwell.
Snedeker, Jesse & John Trueswell. 2003. Using prosody to avoid ambiguity: Effects of speaker awareness and referential context. Journal of Memory and Language 48. 103–130.
Stone, Adam. 2017. Neural systems for infant sensitivity to phonological rhythmic-temporal patterning. Washington, DC: Gallaudet University PhD dissertation.
Stone, Adam, Laura-Ann Petitto, & Rain Bosworth. 2017. Visual sonority modulates infants' attraction to sign language. Language Learning and Development 14(2). 130–148.
Stone, Christopher. 2009. Towards a Deaf translation norm. Washington, DC: Gallaudet University Press.
Swerts, Marc & Emiel Krahmer. 2005. Cognitive processing of audiovisual cues to prominence. Proceedings of Audiovisual Speech Processing (AVSP), 29–30. British Columbia, Canada.
Sze, Felix. 2008. Blinks and intonational phrases in Hong Kong Sign Language. In Josep Quer (ed.), Signs of the time. Selected papers from TISLR 2004, 83–107. Hamburg: Signum.
Tang, Gladys, Diane Brentari, Carolina González, & Felix Sze. 2010. Crosslinguistic variation in the use of prosodic cues: The case of blinks. In Diane Brentari (ed.), Sign languages, 519–542. Cambridge: Cambridge University Press.
Truckenbrodt, Hubert. 2012. The semantics of intonation. In Claudia Maienborn, Kai von Heusinger, & Paul Portner (eds.), Semantics: An international handbook of natural language meaning, 2039–2069. Berlin: De Gruyter Mouton.
Tyrone, Martha E., Nam Hosung, Elliott Saltzman, Gaurav Mathur, & Louis Goldstein. 2010. Prosody and movement in American Sign Language: A task-dynamics approach. Conference Proceedings of Prosody 2010. Chicago, IL.
Vaissiere, Jacqueline. 1983. Language-independent prosodic features. In Anne Cutler & D. Robert Ladd (eds.), Prosody: Models and measurements, 53–65. Berlin: Springer.
van der Hulst, Harry. 1993. Units in the analysis of signs. Phonology 10(2). 209–241.
Wilbur, Ronnie. 1990. An experimental investigation of stressed sign production. International Journal of Sign Linguistics 1. 41–59.
Wilbur, Ronnie. 1994. Eyeblinks and ASL phrase structure. Sign Language Studies 84. 221–240.
Wilbur, Ronnie. 1997. A prosodic/pragmatic explanation for word order variation in ASL with typological implications. In Kee Dong Lee, Eve Sweetser, & Marjolijn Verspoor (eds.), Lexical and syntactical constructions and the construction of meaning, 89–104. Amsterdam: John Benjamins.
Wilbur, Ronnie. 1999a. A functional journey with a formal ending: What do brow raises do in American Sign Language? In Michael Darnell, Edith Moravscik, Michael Noonan, Frederick Newmeyer, & Kathleen Wheatley (eds.), Functionalism and formalism, volume II: Case studies, 295–313. Amsterdam: John Benjamins.
Wilbur, Ronnie. 1999b. Stress in ASL: Empirical evidence and linguistic issues. Language and Speech 42(2–3). 229–250.
Wilbur, Ronnie. 2000. Phonological and prosodic layering of nonmanuals in American Sign Language. In Karen Emmorey & Harlan Lane (eds.), The signs of language revisited: An anthology to honor Ursula Bellugi and Edward Klima, 215–244. Mahwah, NJ: Lawrence Erlbaum.
Wilbur, Ronnie. 2009. Effects of varying rate of signing on ASL manual signs and nonmanual markers. Language and Speech 52(2–3). 245–285.
Wilbur, Ronnie & Cynthia Patschke. 1999. Syntactic correlates of brow raise in ASL. Sign Language & Linguistics 2(1). 3–41.
Wilbur, Ronnie & Lesa Petersen. 1997. Backwards signing in ASL syllable structure. Language and Speech 40. 63–90.
Wilbur, Ronnie & Brenda Schick. 1987. The effect of linguistic stress on sign movements in ASL. Language and Speech 30(4). 301–323.
Wilbur, Ronnie & H.N. Zelaznik. 1997. Kinematic correlates of stress and position in ASL. Paper presented at the Linguistic Society of America, Chicago.


5 VERB AGREEMENT
Theoretical perspectives

Josep Quer

5.1  Introduction

To a greater or a lesser extent, all sign languages known to date have been shown to have ways to mark reference to arguments on verbal forms, either by linking the start and end points of the movement path of the verb, its hand orientation, or both, to the referential locations (also known as R-loci) associated with those arguments. The phenomenon has been referred to as 'directionality' from the very start of sign language research (e.g., Fischer & Gough 1978). This kind of marking, which is realized in only a subsection of the verbal lexicon (so-called agreeing or agreement verbs), has been identified with grammatical agreement marking in a rather large part of the research. However, a competing view has developed over the years that denies this characterization as agreement and instead considers it the blending of a gestural component with the lexical form of the verb (e.g., Liddell 2000, 2003; Schembri et al. 2018), labeling such verbs indicating verbs (from this perspective, forms gesturally point at, or are oriented towards, the locations where arguments are placed in real or surrogate space mapped from mental representations). This chapter offers a review of the scholarship that takes this type of argument encoding as grammatical agreement, and presents the formal analyses proposed and the arguments made to defend its linguistic, as opposed to gestural, nature.1

5.2  Properties of agreement in sign languages

The general definition of agreement in natural languages establishes that it is the phenomenon whereby an element (the target) matches another one in some grammatical feature(s) triggered by the latter (the controller) within a certain domain (Steele 1978; Barlow & Ferguson 1988; Bickel & Nichols 2007). Typically, agreement is local within a clause, where the morphosyntactic (or semantic) features of arguments such as person, number, or gender can determine the morphological marks on the verb, which covary with them. Many sign languages have been reported to display this phenomenon. In this section, different properties associated with it will be described and exemplified.


5.2.1  Agreement markers

The most prominent exponent of agreement in sign languages is the realization of the phonological feature of movement path of the verb, known as directionality: if a verb is specified for path movement, it has the potential to agree with the referential locations associated with its external and internal arguments. In grammatical terms, this has been interpreted in some accounts as agreement with the person feature of the subject and the object arguments encoded in the associated locations, which are also exploited by the pointing signs realizing pronouns (see Perniss, Chapter 17; Kuhn, Chapter 21).2 For a verb like GIVE-PRESENT in Catalan Sign Language (LSC), two possible agreement patterns are illustrated in (1): 3GIVE-PRESENT1 (1a) and 1GIVE-PRESENT2 (1b), illustrated in Figures 5.1a and 5.1b, respectively. Since this is a case of a ditransitive predicate, the first referential location corresponds to the subject and the second one to the indirect object. Agreement marking also surfaces with transitive predicates, where the endpoint of the path agrees with the location associated with the direct object argument, as in (2).

(1) a. 3GIVE-PRESENT1
       'She gave me a present.'
    b. 1GIVE-PRESENT2
       'I gave her a present.'  (LSC)

(2) 1SUPPORT3
    'I support her.'  (LSC)

Figure 5.1  Two agreeing forms of the verb GIVE-PRESENT in LSC: (a) 3GIVE-PRESENT1; (b) 1GIVE-PRESENT2 (© LSC Lab, Pompeu Fabra University)

A second marker of agreement is facing (of the fingertips, the palm, or the whole hand) towards the object location. It often combines with path movement, as in (1), but it can be the agreement marker on its own as well, as in (3), illustrated in Figure 5.2.

(3) 1TAKE-CARE3
    'I take care of her.'  (LSC)

Figure 5.2  Agreeing form of the verb TAKE-CARE in LSC (1TAKE-CARE3) (© LSC Lab, Pompeu Fabra University)

5.2.2  Verb classes and agreement

Agreement markers do not appear on every verb form in sign languages. For American Sign Language (ASL), Padden (1983[1988]) established that verbs split into three morphological classes: (i) agreeing/agreement verbs, which overtly agree with subject and object arguments; (ii) spatial verbs, which agree with locative arguments; and (iii) plain verbs, which have invariant forms and do not encode agreement. Typically, research has mostly concentrated on the analysis of agreement verbs, which have been illustrated above in (1)–(3). Spatial verbs (ii) are inflected for their locative arguments, as in (4)–(5) from LSC. Figures 5.3a and 5.3b illustrate the respective verb forms.3 Note that the verb in (4) inflects for the location of the goal argument, while in (5) it shows agreement with a body location that stands for that of the intended referent.

(4) TOMORROW IX1 CHILD BRINGa SCHOOL IXa
    'Tomorrow I will bring my child to school.'

(5) MONTH PAST IX3 GET-SURGERY-ONeye
    'Last month she had eye surgery.'  (LSC)

Figure 5.3  Agreeing forms of the verbs BRING (a: BRINGa) and GET-SURGERY-ON (b: GET-SURGERY-ONeye) in LSC (© LSC Lab, Pompeu Fabra University)


Next to these two classes of inflecting verbs, sign language lexicons also include verbs that do not carry any kind of inflection, because they are specified for a location (on the body, in the case of body-anchored signs, or in signing space) at the lexical level and are thus unable to modify their form for path or orientation. Examples of this type of verb in LSC are THINK in (6), articulated at the forehead, and PASS in (7), articulated in signing space, as illustrated in Figures 5.4a and 5.4b.

(6) IX3 THINK++
    'She keeps thinking.'

(7) IX2 PASS SURE
    'You will pass for sure.'  (LSC)

Figure 5.4  Forms of the verbs THINK (a: THINK++) and PASS (b) in LSC (© LSC Lab, Pompeu Fabra University)

Within the class of agreement verbs that show path movement, a subclass is systematically identified where the path moves in the opposite direction, starting in the location of the object and ending in the location of the subject. This subtype of agreement verbs is called backwards and is exemplified with the LSC verbs CHOOSE and INVITE in (8) and (9) and illustrated in Figure 5.5.

(8) 1CHOOSE3
    'She chose me.'

(9) 2INVITE1
    'I invite you.'  (LSC)

Figure 5.5  Agreeing forms of the verbs CHOOSE (a: 1CHOOSE3) and INVITE (b: 2INVITE1) in LSC (© LSC Lab, Pompeu Fabra University)

In addition, some plain verbs that are articulated in signing space have the possibility to realize agreement with a location associated with an argument. For example, the in-principle invariable PASS in LSC (7) can also be articulated at a certain location, as in (10) (Figure 5.6). In this example, the modified location of the verb denotes the subject argument, but it could also denote the object if the location referred to it. As will be reviewed below, interpretations of this modification are split between those who do not consider it part of the morphological system of agreement and those who do.

(10) IX3 3PASS SURE
     'She will pass for sure.'  (LSC)

Figure 5.6  Agreeing form of the verb PASS in LSC (© LSC Lab, Pompeu Fabra University)

In sum, from a morphological point of view, sign language verbs can in principle be classified into three main types, with two subclasses for agreement verbs. Plain verbs can also instantiate agreement with a single argument. Table 5.1 summarizes the possibilities.

Table 5.1  Classification of verbs according to their agreement possibilities

Class            Subclass                     Main exponent
Inflecting       Agreement verbs: Regular     Subject to object path movement
                 Agreement verbs: Backwards   Object to subject path movement
                 Spatial verbs                Source to goal path movement / Location
Non-inflecting   Plain verbs                  Potential agreement with a single location

5.2.3  Agreement auxiliaries

For some sign languages, an auxiliary predicate has been described that encodes the agreement relations of the lexical predicate it co-occurs with. Unlike typical spoken language auxiliaries, the ones instantiated in sign languages encode subject and object agreement, but not categories such as tense or mood, although in some sign languages certain aspectual categories can be conveyed by the agreement auxiliary as well. In general, they combine with lexical plain verbs that do not carry agreement morphology, as illustrated in (11) from Argentine Sign Language (LSA), although they can co-occur with agreement verbs (inflected or uninflected) under special circumstances (e.g., emphasis), as well as with backwards agreement verbs. It is important to note that in the latter case the movement path of the auxiliary is the regular one from subject to object locations, that is, in the opposite direction to that of the lexical backwards verb, as exemplified in (12) from LSC.

(11) JOHN1 MARY2 LOVE 1AUX2
     'John loves Mary.'  (LSA, from Massone & Curiel 2004: 80)

(12) 3AUX1 1INVITE3
     'She invited me.'  (LSC)

According to Steinbach & Pfau (2007), agreement auxiliaries can be grouped into three classes, based on their origins in the process of grammaticalization: (i) those that stem from the concatenation of two pronouns realized as a pointing sign moving from the subject location to the object one; (ii) those that have grammaticalized from non-indexical nominals such as the noun PERSON in German Sign Language (DGS); and (iii) those that derive from predicates such as GIVE in Greek Sign Language (GSL), MEET in Taiwan Sign Language (TSL), or GO-TO in Sign Language of the Netherlands (NGT). A sign language can have more than one agreement auxiliary, as in TSL or LSC. In the latter language, the auxiliary of type (iii), i.e., GIVE-AUX, is based on the root of GIVE and has a more restricted distribution than the one of type (i), because it carries an additional semantic layer of causative meaning, and it characteristically appears with psychological predicates, as in (13). Figure 5.7a illustrates type (i), and Figure 5.7b displays type (iii) in LSC.

              [da]
(13) 3GIVE-AUX1 ANNOY
     'That annoys me.'  (LSC)

Figure 5.7  Agreeing auxiliaries AUX (a: 3AUX1) and GIVE-AUX (b: 3GIVE-AUX1) in LSC (© LSC Lab, Pompeu Fabra University)

Since early sign language research had not dealt with these grammatical markers, their description and analysis have added new empirical evidence to the study of agreement systems in sign languages.

5.2.4  Non-manual agreement

In addition to the manual marking of agreement, non-manual marking of agreement has been described and analyzed for ASL (Bahan 1996; Neidle et al. 2000). The overt non-manual markers with transitive verbs are head tilt and eye gaze towards the locations of the subject and object arguments, respectively, and they are argued to co-occur with any type of verb, irrespective of its morphological ability to display manual agreement inflection. In line with their general analysis of non-manual markers, Neidle et al. (2000) propose that they are the overt realization of functional heads; in this case they originate in AgrSº and AgrOº and spread over the manual material in the c-command domain of each projection, as represented in (14). Figure 5.8 shows that the head is tilted to the right of the signer, where the subject has been localized, and eye gaze is directed to the object location on the left. It is claimed that, given that AgrSº dominates AgrOº, the onset of head tilt is slightly earlier than that of eye gaze, despite their linear contiguity. With intransitives, both head tilt and eye gaze can mark agreement with the single argument.

(14)            head tilti      eye gazej
     JOHNi [+agri]AgrS [+agrj]AgrO LOVE MARYj
     'John loves Mary.'
     (ASL, adapted from Bahan 1996: 118)


Figure 5.8  Non-​manual agreement markers in ASL (ASL, Bahan 1996: 120; © Ben Bahan, reproduced with permission)

This type of non-manual agreement has only been described for ASL. Zwitserlood & van Gijn (2006) empirically ruled out its existence in NGT. Some objections have been raised with respect to its characterization and analysis in ASL as well (for some descriptive and conceptual issues, see Sandler & Lillo-Martin (2006: 42–46)). From an experimental perspective, Thompson et al. (2006) tested the validity of eye gaze as an agreement marker in ASL with an eye tracking experiment, and the results did not support the generality of the analysis for all types of verbs (see Hosemann, Chapter 6, for details).

5.3  Theoretical analyses

Two main theoretical approaches to agreement in sign languages can be identified in the scholarship devoted to the topic: one that ascribes the properties of agreement systems and agreement verbs to the thematic structure of predicates, and another one that treats agreement in purely syntactic terms. In addition, an alternative approach to the phenomenon in terms of clitic morphosyntax has also been pursued. In the next subsections, the main representative accounts of those lines of analysis are reviewed.4

5.3.1  Thematic approaches

In line with Friedman's (1975) early proposal, Meir (1998, 2002) offers a detailed account of verb classes and agreement in Israeli Sign Language (ISL) that takes thematic properties as the basis for the system. The basic assumption is that inflecting verbs morphologically incorporate a morpheme that denotes the path movement or trajectory traversed by a referent from a Source to a Goal. This is quite transparent for spatial predicates such as MOVE in the LSC example (15), where the movement path starts at the Source location and ends at the Goal location:

(15) IX1 SON BRISTOLa aMOVEb BARCELONAb
     'My son moved from Bristol to Barcelona.'  (LSC)

Meir argues that this type of marking is the instantiation of a directional morpheme DIR encoding the movement path. One central contribution in her system is the idea that the so-called 'agreement verbs' are the result of the combination of this directional morpheme with the verbal roots, which are not inherently marked for agreement but rather incorporate the marking from DIR. They do not have spatial semantics, but verbs like GIVE, PAY or SEND in ISL semantically encode physical transfer, while TEACH, TELL or INFORM encode abstract transfer. Transfer is taken as the metaphorical type of motion that explains the combination of these verb roots with the directional morpheme. Plain verbs are defined as those that denote neither motion nor transfer. From this perspective, Padden's verb classes are not primitive but derived from the restrictions imposed by their lexical semantics.

Meir's (1998, 2002) account is cast in Jackendoff's (1997, 1990) Lexical Conceptual Structures (LCS) representing the lexical-semantic information and Predicate Argument Structures (PAS) encoding the argument-taking properties of the predicate. Slots on the LCS are mapped into the level of PAS by means of linking rules, by which more prominent positions on the LCS are mapped into more prominent positions of the PAS. LCSs capture the predicate-argument relations through two thematic relations, namely spatial (Source, Theme, Goal) and affectedness (Agent, Patient). Accordingly, they are structured in two tiers: the spatial tier and the action tier. In verbs of transfer the motion component is encoded on the spatial tier, whereas the affectedness relation is represented on the action tier, as illustrated in (16) for a predication like 'I gave you the book' (Meir 2002: 432).

(16) Lexical Conceptual Structure of Verbs of Transfer
     Spatial tier: CAUSE ([α], [GO ([BOOK]γ, [Path FROM [α] TO [β]])])
     Action tier:  AFF ([I]α, [YOU]β)

As Meir (2002: 417) states, "[s]yntax does not figure prominently in the analysis" because the domain where the agreement relation obtains is the more general one between a head and its dependents, which is encoded in the lexicon. An important advantage of this analysis is that it generalizes not only over the two alleged classes of spatial and agreement verbs, but is also able to account for the behavior of backwards agreement verbs without any additional explanation: the transfer movement in verbs like TAKE or CHOOSE is from Source to Goal, even if the path looks reversed from the point of view of syntactic arguments (it goes from object to subject). This is foreseen in the reversibility of the linking relations of the path arguments: FROM [α] TO [β] is the path specification for regular agreement verbs, and FROM [β] TO [α] the one for backwards agreement verbs. At the morphological level, Meir dissociates path marking from orientation marking and puts forth the idea that facing (orientation) of the hand in fact instantiates syntactic marking of the object argument. In this way, the independent mechanisms are brought under the following morphological principles (Meir 2002: 425):

(17) Principles of Sign Language Agreement Morphology
     a. The direction of the path movement of agreement verbs is from Source to Goal.
     b. The facing of the hand(s) is towards the object of the verb (whichever of Source or Goal is not subject).
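To make the division of labor in (17) concrete, the following minimal sketch shows how the two principles jointly determine the path and the facing of regular and backwards agreement verbs. It is an illustration only, not part of Meir's analysis; the verb glosses, role labels, and locus names are invented for the example.

```python
# Illustrative sketch of principles (17a) and (17b); not Meir's (2002) formalism.
# Loci, verb glosses, and the dictionary-based encoding of roles are assumptions.

def realize_agreement(loci):
    """Return (path, facing) for an agreement verb.

    `loci` maps the thematic roles Source/Goal and the syntactic functions
    subject/object of the clause to R-loci in signing space.
    """
    # Principle (17a): the path runs from the Source locus to the Goal locus,
    # regardless of which of the two is the syntactic subject.
    path = (loci["source"], loci["goal"])
    # Principle (17b): the hand(s) face the object (the non-subject argument).
    facing = loci["object"]
    return path, facing

# Regular agreement verb, e.g., GIVE: subject = Source, object = Goal.
print(realize_agreement({"source": "1", "goal": "2",
                         "subject": "1", "object": "2"}))
# -> (('1', '2'), '2'): path from locus 1 to locus 2, facing locus 2.

# Backwards verb, e.g., TAKE: subject = Goal, object = Source.
print(realize_agreement({"source": "3a", "goal": "1",
                         "subject": "1", "object": "3a"}))
# -> (('3a', '1'), '3a'): the path is reversed relative to subject and object,
#    but the facing still targets the object, as described above.
```

On this view, the apparent reversal of backwards verbs lies only in how Source and Goal are mapped onto subject and object; path and facing themselves follow (17) uniformly.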


The dissociation of these two exponents is empirically supported by their independent behavior with different verb types: while with regular and backward agreement verbs the path moves in opposite directions but always from the Source location to the Goal location, the hand remains oriented towards the object argument (the Goal with regular agreement verbs and the Source with backward agreement verbs). Bos (2017 [1998]), working within the same framework on NGT, makes a proposal germane to Meir’s, but with some differences. One main point of divergence in her analysis is that the initial location of the movement path is not always determined by the Source argument: it can also be determined by the Theme, as is often the case with backward agreement verbs like TAKE or FETCH. According to Bos (2017: 240), “[w]ith backwards verbs that have a Goal, Theme, and Source argument, agreement of the beginning point may shift from the Source to the Theme, when the Source argument is missing”. She interprets this as a modality effect, arguing that the interpretation or marking of Theme and Source is merged in space. This same fact was pointed out independently by Quadros & Quer (2008: 541) for LSC and Brazilian Sign Language (Libras), but not only with backwards verbs: both regular verbs like PRESS or BEAT and backward verbs like INVITE, CHOOSE, or SUMMON are clear transitives. The thematic approaches to verb agreement in sign languages unify all types of inflecting verbs under spatial semantics, be it literal or abstract/metaphorical as in the case of transfer predicates. Still, spatial verbs that select for a location argument, such as STAY in LSC (18), do not follow immediately from the general account, because no movement path is involved (for a discussion of single argument agreement, see Section 5.3.2.1 below).

(18) UNIVERSITY GALLAUDET STAYa YEAR TWO

‘He stayed at Gallaudet University for two years.’ (LSC)

Padden (1983[1988], 1990), adopting a syntactic perspective (see Section 5.3.2.1 below for details), rejects a unified approach to spatial and agreement verbs on the basis of three main empirical arguments:

(i) The interpretation of the movement path in the two classes of verbs is different: with spatial verbs, it is interpreted as actual movement between locations (19b), but with agreement verbs, the initial and final point of the path simply agree with the person features (R-loci) of the respective arguments (19a).

(19) a. 1GIVE2
        ‘I give you.’
     b. aCARRY-BY-HANDb
        ‘I carry it from here to there.’
     (ASL, Padden 1990: 124)

(ii) Distributive or exhaustive marking on an argument is interpreted differently depending on the verb type: with an agreement verb, it is interpreted as plural and the locations of the reduplicative morpheme are irrelevant (20a), whereas with a spatial verb, they are interpreted in a locative fashion (20b).


(20) a. 1GIVE3++
        ‘I give it to (each of) them.’
     b. PUTa PUTb PUTc
        ‘I put them there, there and there.’
     (ASL, Padden 1990: 125)

(iii) Reciprocal marking consisting in the simultaneous realization with both hands of the duplicated path in opposite directions only combines with agreement verbs to yield the reciprocal interpretation (21a). The same superficial type of marking applied to spatial verbs leads to a literal locative interpretation of the two opposite paths (21b).

(21) a. aGIVEb / bGIVEa
        ‘They gave something to each other.’
     b. aPUTb / bPUTa
        ‘I put them in each other’s place.’
     (ASL, Padden 1990: 126)

Rathmann & Mathur (2008: 204) add further arguments to Padden’s distinction.5 For the interaction of verb class and reciprocal marking in DGS, see Pfau & Steinbach (2003). One of the main advantages of the thematic approaches is that they unify regular and backwards agreement verbs: within this type of analysis, the only difference between the two lies in the mapping of the Source and Goal arguments onto opposite slots on the affectedness tier, as explained above. However, the distinction needs to be specified at the lexical level in any case, and the idiosyncrasy of backward verbs is simply restated in their differing LCSs, which makes the account lose its claimed generality. Another apparent advantage of thematic approaches consists in the unification of spatial and agreement verbs based on the same explanation: both denote literal or abstract/metaphorical motion. To this it has been objected that in many cases of agreement verbs the denotation of a transfer relation has to be stretched to such a point that it loses explanatory force (Quadros & Quer 2008; Quer 2010; Pfau et al. 2018). Take, for instance, the transitive agreement verbs PRESS, BEAT, INVITE, CHOOSE, or SUMMON in LSC and Libras already mentioned above, or the verb CHEAT in LSC: it is hard to conceive of a transfer relation in the meaning of these predicates without losing the core notion of transfer. In parallel, one would expect the semantics of transfer to require the presence of agreement right away, but we know that the diachrony of verbs shows shifts from the plain class to the agreement class (e.g., PHONE and FAX in several sign languages, which started out as plain verbs and became agreement verbs; see Meir (2012, 2016) on this type of change in ISL), but not the other way round. In addition, there are some lexical items that show agreeing and plain behavior within the same language, as is the case of BORROW in LSC (Quadros & Quer 2008: 548) or TRUST in DGS (Pfau et al. 2018: 14). Moreover, if transfer semantics (and its concomitant morphological realization as a directional morpheme encoding the motion component of the transfer) were the factor determining verb class with respect to agreement, one would expect to be able to identify agreement verbs as a group cross-linguistically. However, despite many overlaps in individual items across the lexicons of different sign languages (e.g., GIVE or TAKE), the agreement behavior of lexical predicates cannot be predicted from the single


semantic notion of transfer, even if the phonological specification as a body-anchored sign is put aside. In addition, cases are also found of predicates that, despite sharing the same semantics across two different languages, have lexicalized directionality in opposite ways: ASK in LSC is backwards, while it is a regular agreement verb in Libras; ASK-FOR is regular in LSC but backwards in Libras (Quadros & Quer 2008: 548). Pfau et al. (2018: 13–14) also point out that there are agreement verbs such as EXPLAIN in DGS (and LSC) with a meaning of transfer that only mark it through orientation, and not with path, contrary to expectation under a thematic approach. Agreement auxiliaries did not feature in the first analyses of agreement because ASL does not have one, but research has shown that they are not rare across sign languages (see Section 5.2.3 above). This type of auxiliary contributes very valuable evidence for the analysis of agreement as a grammatical phenomenon. First, agreement auxiliaries stem from the grammaticalization of agreement features exclusively and thus agree with the relevant person features of subject and object, and not with Source and Goal. This becomes clear when they co-occur with a backwards agreement verb: while the lexical verb’s path moves from Source to Goal, the auxiliary moves in the opposite direction, namely from the subject to the object location, as exemplified in (22) for LSC.

(22) IX3a IX3b 3aAUX3b 3bTAKE3a

‘She picked him up.’ (LSC, Quer 2011: 193)

This important piece of evidence was pointed out by Mathur (2000), Rathmann (2000), and Steinbach & Pfau (2007) for DGS, Smith (1990) for TSL, Bos (1994) for NGT, and Quadros & Quer (2008) for Libras and LSC; therefore, it turns out to be a rather robust property of sign language agreement auxiliaries. Its relevance lies in the fact that the auxiliary cannot be taken to instantiate the semantics of movement or transfer, since it has no lexical meaning to assign thematic roles or an underlying LCS. It is the pure instantiation of agreement based on syntactic functions, and it must at least be treated as such, irrespective of the treatment that agreement on lexical predicates receives (Quadros & Quer 2008; Quer 2010, 2011; Steinbach 2011). Quite significantly, agreement auxiliaries co-occur with plain verbs, many of which do not have transfer or motion semantics. A relevant case would be that of most psychological predicates such as LIKE or BOTHER, which tend to be morphologically plain and stative in their basic denotation and involve theta-roles other than Source and Goal. In addition, agreement can be used in DGS to extend the argument structure of single-argument predicates like WAIT and LAUGH (Steinbach 2011: 215). Agreement auxiliaries also highlight an important property: they can only agree with [+animate] or [+person] arguments, as illustrated by the contrast in (23), adapted from Quer (2011: 193). This characteristic further underscores the fact that they cannot be reduced to the Source-Goal analysis of agreement marking by path movement: while the verb TAKE can agree with an inanimate object, the co-occurrence of an auxiliary agreeing with it renders the sentence ungrammatical.

(23) a. BOOKx xTAKE2 (*2AUXx)
        (Intended: ‘Pick up the book!’)
     b. CHILD3 3TAKE2 2AUX
        ‘Pick up the child!’
     (LSC, Quer 2011: 193)


Despite the appealing simplicity of thematic approaches to sign language agreement, they face a number of empirical and analytical challenges, some of which have been addressed by alternative analyses with a syntactic basis. The most relevant ones are reviewed in the next subsection.

5.3.2  Syntactic approaches

Under the label of syntactic approaches, a number of different analyses are grouped that share the assumption that sign language agreement is determined by syntactic properties like subject and object syntactic functions. Beyond this, the analyses differ significantly in implementation, theoretical assumptions, and level of detail. Three main groups of approaches will be reviewed here on the basis of some representative works: (i) the foundational ideas of the syntactic approach by Padden (1983[1988], 1990) and related contributions; (ii) minimalist analyses of agreement in different sign languages (Costello 2015; Pfau et al. 2018; Lourenço 2018); (iii) a reinterpretation of agreement marking as clitics (Nevins 2011).

5.3.2.1  Foundations of a syntactic approach

The most influential piece of work for the scholarship on sign language agreement is probably Padden (1983[1988], 1990), focusing on ASL. The establishment of the three morphological classes described in Section 5.2.2 (plain, spatial, and agreement (= agreeing/inflecting)) and the distinction between types of agreement (spatial vs. grammatical, see Section 5.3.1) have been replicated in virtually all languages that have been looked at for agreement. The essence of Padden’s analysis is that agreement in ASL is triggered by syntactic functions such as subject and object, and not by the thematic roles Source and Goal, as proposed early on by Friedman (1975) and later developed in the thematic approaches to agreement reviewed in Section 5.3.1. One of the main arguments in favor of a syntactic account is based on a generalization about the realization of the agreement pattern: while object marking seems obligatory, subject marking is optional. The comparison of regular and backwards agreement verbs provides the crucial evidence: the argument for which the marking can be omitted is the subject in both classes, namely the initial location of the path in regular agreement verbs and the final location in backwards verbs. A thematic approach is unable to state a generalization of subject marking omission for both classes, since the omitted argument is the Source one with regular agreement (24), whereas it is the Goal one with backwards agreement (25).

(24) WOMAN GIVE1 NEWSPAPER
     ‘The woman gave me a newspaper.’ (ASL, Padden 1983[1988]: 137)

(25) a. IX1 3TAKE-OUT FRIEND SISTER
        ‘I’m taking out my friend’s sister.’
     b. *IX1 TAKE-OUT1 FRIEND SISTER
     (ASL, Padden 1983[1988]: 138)


In her account, Padden (1990) left out of agreement proper cases of plain verbs that are articulated at a referential location (cf. (10) above), as in her example (26), where the ASL verb WANT is articulated at three different locations. According to her, the sentence is ambiguous between two interpretations, depending on whether the location is interpreted as the plural subject or the object argument, as reflected in the two possible translations of example (26). (26)

WOMAN WANT3a WANT3b WANT3c

‘The women are each wanting.’ ‘The woman wants this, and that and that one, too.’

(ASL, Padden 1990: 121)

Padden argues that these are not examples of plain verbs displaying agreement, but rather cases of verbs that contain pronoun clitics. The main argument for this analysis comes from the fact that this localization strategy is not exclusive to verbs and can be found with adjectives and nouns as well, as in (27). (27)

I SEE DOGa DOGb DOGc

‘I see a dog here, there and there, too.’

(ASL, Padden 1990: 122)

Low selectivity with respect to their host is taken to be a core property of clitics, as opposed to inflectional morphology like agreement. Cliticization, though, is limited to verbs that are not body-anchored, and the clitics can co-appear with overt full pronouns. From this perspective, the distinction between plain and agreement verbs must be maintained, despite the surface similarities in the cases just discussed. Later syntactic analyses, though, will question this distinction and subsume plain verbs articulated at a referential location as instances of single agreement (see Section 5.3.2.2). Engberg-Pedersen (1993) also questions that cases like (26) are instances of agreement proper (semantic agreement, in her view) and proposes instead that they are examples of pragmatic agreement, whereby the relation between the predicate and its argument is left unspecified; it occurs in particular when contrast is at play. Pragmatic agreement can only be marked for one argument and only by location, never by orientation, unlike agreement proper. Meir (1998: 99) also considered these cases a discourse-level phenomenon, not part of sentence-level grammar.

5.3.2.2  Generative syntactic analyses

Based on Padden’s foundational ideas, which were cast in a Relational Grammar framework, several studies have developed them into explicit and detailed analyses within generative syntax, especially within the Minimalist Program (Chomsky 1995, 2000). Albeit close in many respects, they differ in concrete assumptions and execution. Three of them will be reviewed in turn: Costello (2015), Pfau et al. (2018), and Lourenço (2018). For a more semantically oriented account, although grounded in a formal syntactic analysis, see Gökgöz (2013).


5.3.2.2.1  Costello (2015)

One of the main departures of Costello’s (2015) proposal with respect to previous accounts lies in the assumption that it is not the person feature that drives agreement, but rather an ‘identity’ phi-feature, analogous to other phi-features like person, number, and gender, but different from them: “the semantic distinction generates categories in the extreme, distinguishing one referent (in its own category) from another” (Costello 2015: 279). Identity comes close to the notion of referential index. Its morphophonological expression may be optional or alternatively result in a null form. The identity feature is argued to be hosted in D°, the head of DP, and it is not inherent to a noun, but just assigned to it. It may be realized manually as a pointing or non-manually as eye gaze or head tilt. The point of departure is thus a shift in perspective, placing the burden of agreeing forms on the nominal identity feature across nouns and verbs. When no phonological material is inserted in D, the specific value of the identity feature is realized through the spatial marking of the noun itself, because of N moving to the D head, as represented in (28) for the localized sign HOTEL in Spanish Sign Language (LSE) (Costello 2015: 289).

(28)

Within this analysis, verb agreement arises through the valuation of the unvalued identity (and number) features of the v head: v probes into the identity (and number) feature of its internal argument, as the goal of the Agree relation, and its values are copied onto the probe, after which V moves to the v head; after merging of T, this head probes into the features of the external argument in [Spec, vP] and its unvalued identity feature gets valued. After further movement of the V+v head to T, the verbal form will spell out the identity features of the subject and the object arguments as two different locations, resulting in the well-known exponence of agreement. The working of Agree for a verb like TRICK in LSE is schematized in (29) (Costello 2015: 292).


(29) 
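The two Agree operations just described can also be rendered as a small procedural sketch. This is my own schematization for expository purposes, not Costello's formalism; the class names, locus labels, and the simplifying assumption that T's value surfaces as the initial location and v's value as the final location are all illustrative.

```python
# Sketch of Agree as the copying of a valued 'identity' feature from a goal DP
# onto an unvalued probe (after the description of Costello 2015 given above).
# Names, loci, and the spell-out step are simplifications, not the actual analysis.

class DP:
    def __init__(self, gloss, identity):
        self.gloss = gloss
        self.identity = identity      # valued identity feature (an R-locus)

class Probe:
    def __init__(self, label):
        self.label = label
        self.identity = None          # unvalued; to be valued by Agree

def agree(probe, goal):
    """Copy the goal's valued identity feature onto the unvalued probe."""
    probe.identity = goal.identity

# A transitive clause with two third-person arguments, e.g., '3a TRICK 3b'.
subject, obj = DP("IX", "3a"), DP("IX", "3b")
v, T = Probe("v"), Probe("T")

agree(v, obj)        # first Agree: v is valued by the internal argument
agree(T, subject)    # second Agree: T is valued by the external argument

# After movement of V+v to T, the verb spells out both values as its initial
# and final locations (simplified spell-out): 3aTRICK3b.
print(f"{T.identity}TRICK{v.identity}")
```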

For Costello (2015), agreement auxiliaries follow naturally from this analysis if it is assumed that the auxiliary is inserted in v and the lexical verb stays in situ: in this way, the auxiliary acquires the same agreement markings through the two Agree operations as just seen in the case of a lexical agreement verb. Given this framework, the cases of plain verbs showing agreement with a single argument (cf. Section 5.3.2.1), analyzed as instances of cliticization and not of agreement in Padden (1990), are derived in Costello’s account by the same set of assumptions as cases of agreement, and not as an instance of some other mechanism. Therefore, the verb PASS in LSE, which is plain, in a sentence like IX1 EXAMx PASSx ‘I passed the exam’ can agree with its internal argument in the way illustrated in (30) (Costello 2015: 290):


(30)

Costello’s approach has the clear advantage over previous accounts of encompassing all the agreement phenomena in a unified way. His proposal states that syntactic agreement always takes place (in LSE, but arguably in other sign languages as well), even in cases where it does not result in its morphological manifestation. Absence of agreement marking might be due to two circumstances: (i) the identity feature takes a default value and no phonological expression of localization is encoded; (ii) the phonological specification of the verb blocks its modification for agreement marking. As for backwards verbs, the suggestion is made that they

may involve a spatial mapping (or a metaphorical extension of such a mapping) that impacts on the form of the verb (rather than the fact that the arguments are locative). The underlying agreement process, however, remains the same for all types of argument.
(Costello 2015: 294)

This account also acknowledges the existence of pragmatic agreement for cases like (26), but does not consider it agreement proper, since it involves vague associations between different elements: similar locations are assigned to different identity values, but the association is not created in syntax, but rather at the phonological level, and resolved in pragmatics.6


5.3.2.2.2  Pfau et al. (2018)

Pfau et al. (2018) developed another detailed syntactic analysis of agreement in DGS (extendible to other sign languages) within the Minimalist Program framework (Chomsky 2000, a.o.). It strictly takes agreement to be the result of the operation Agree, according to which agreement is a process that copies phi-features from controllers (goals) onto agreement targets (probes). In the case of a two-place predicate, the operation Agree applies twice: v, the head of vP, which has uninterpretable phi-features, probes into the object argument, with interpretable features, and copies them; the T head, also endowed with uninterpretable features, probes into the subject DP and copies its valued features. The processes are represented in (31) (Pfau et al. 2018: 19).

(31)

Verb movement to v and further to T is taken to result in a complex head as the one represented in (32). Under the assumptions of Distributed Morphology, the roots and features that syntax works with are interpreted at Phonological Form, where Vocabulary Insertion takes place. In the case of (32), the exponents of agreement and the verb root are inserted, resulting in the usual path marking of subject and object agreement. (32)

This will be the linearized form in DGS, despite it being head-final, which imposes a derivation of a regular agreement verb as in (33). Note that the notation [*v*] in the tree means that a verb has to enter Agree with v, while [*T*] indicates that a head has to enter Agree with T.


(33)

Pfau et al. (2018) propose that v always has [*T*], so it has to move to the T head. Plain verbs, though, lack [*v*] and consequently do not head-move to v. From this they derive the fact that plain verbs co-occur with an agreement auxiliary that carries the agreement morphology. They draw the parallel with synthetic vs. analytic forms, where the agreeing lexical verb would correspond to the synthetic form and the plain verb together with the agreement auxiliary to the analytic form.7 As a further argument for the patterns of verb movement proposed, it is pointed out that negative headshake is co-articulated with the complex head the agreement verb creates, while it only combines with the auxiliary co-occurring with a plain verb. Backwards verbs also require a particular treatment in this account. As discussed earlier in Section 5.2.3, subject and object grammatical functions are not reversed, but the movement path is, since it goes from the object location to the subject location. The account resorts to Müller’s (2009) treatment of ergativity, which relies on the fact that the two operations that v engages in (Agree with the internal argument and Merge to introduce the external argument) are ordered in the opposite way to accusative syntax: in ergative alignment, Merge of the external argument precedes Agree with it, whereas in accusative alignment, Agree of v with the internal argument precedes Merge of the external one. The result of Merge before Agree with ergative alignment is reflected in (34):

(34)

As a consequence of this, v probes into the valued features of the subject and copies their values onto its unvalued features. Next, T probes into the object and copies its features. In this way, the agreement features are distributed differently in backwards agreement verbs and give rise to the reverse movement path when linearized.
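The effect of this reordering can be illustrated with a rough sketch. It captures only the logic of the ordering of Merge and Agree, not Pfau et al.'s (2018) actual derivation, and it assumes for exposition that the value copied onto T is spelled out as the initial location of the path and the value on v as its final location; the loci and verb glosses are invented.

```python
# Sketch (not Pfau et al.'s implementation): how ordering Agree before or after
# Merge of the external argument distributes the location values over v and T,
# and hence yields a regular vs. a reversed (backwards) path when spelled out.

def derive(alignment, subj_locus, obj_locus):
    """Return (initial, final) locations of the path under a given alignment."""
    if alignment == "accusative":
        v = obj_locus      # v Agrees with the internal argument first,
        T = subj_locus     # then T Agrees with the external argument.
    elif alignment == "ergative":
        v = subj_locus     # the external argument is Merged before v Agrees,
        T = obj_locus      # so v is valued by it; T later Agrees with the object.
    else:
        raise ValueError(alignment)
    # Simplified spell-out: T's value = initial location, v's value = final one.
    return T, v

for verb, alignment in [("GIVE", "accusative"), ("TAKE", "ergative")]:
    start, end = derive(alignment, subj_locus="1", obj_locus="3a")
    print(f"{start}{verb}{end}")
# -> 1GIVE3a : regular verb, path from the subject to the object locus
# -> 3aTAKE1 : backwards verb, path reversed, from the object to the subject locus
```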


Note that this account has to posit two different types of v, one occurring with accusative linearization and another one with ergative linearization, which must be coupled to regular agreement verbs and backwards verbs by some sort of selection. This is clearly a lexical property of individual items, whereas split ergativity systems are typically triggered by tense/aspect or information structure properties. Even if there seem to be some instances of ergativity splits linked to lexical items, the question still remains why backwards verbs are a restricted class and why some lexical items recur across sign languages, as noted in Meir (1998) for ISL and ASL, and in Quadros & Quer (2008) for Libras and LSC.8 A further drawback of this account of backwards verbs is that it cannot account for subject omission across regular and backwards verbs, unless an additional assumption is made concerning uniform case assignment to external arguments prior to Agree that an impoverishment rule can target (Pfau et al. 2018: 27). The authors also address the issue of agreement by orientation, which according to them cannot be treated as a case of multiple exponence, given that it is stable across regular and backwards agreement verbs in marking the object argument. They argue that this agreement marker constitutes an independent incomplete probe on v that is restricted to person: it triggers the first Agree operation probing into the object both with regular and backwards verbs, and that is why it behaves in the same way in both verb classes. Once this Agree operation with the object takes place, the rest of the derivation unfolds differently for regular than for backwards agreement verbs, as summarized above (for details, see Pfau et al. 2018: 28–29).

5.3.2.2.3  Lourenço (2018)

Focusing on the analysis of Libras, Lourenço (2018) partially departs from the received view by positing that what marks agreement in sign languages is not the directional movement path, but solely the matching of the initial and the final points of the verb with the locations of its arguments. He names this process co-localization. His definition of verb agreement is the one in (35).

(35) Verb agreement in sign languages: A verb shows agreement with its argument(s) when the location of the verb is changed in order to match the location of the argument(s), a process called co-localization.

The relevant feature in agreement is [location], which is part of the phi-feature bundle. Its semantic import amounts to a semantic mapping between an entity and an abstract geometrical point. The [location] feature is not tied to particular nouns or entities but is inserted in the numeration as a discourse option; it will be merged on D during the computation, though. Unlike in the other accounts reviewed above, Lourenço (2018) assumes that the loci of phi-probes are C° (percolating down to the head of TP) and v°. Given that Libras shows syntactic asymmetries between agreement and plain verbs, it is argued that the differences are in fact present in the syntactic derivation. Typical plain verbs, namely those that are body-anchored, have a lexically valued specification for [location], namely [location val], and since they are valued for this feature, they cannot be modified. By contrast, agreement verbs potentially have two unvalued [location] features. The actual patterns found across verb classes, depending on the number of Place of Articulation (PoA) features that can be specified in their Prosodic Feature representation, are summarized in (36).


(36)

a. Double agreement verbs: two underspecified slots for PoA – [location: __]VERB[location: __]
b. Single agreement verbs: one underspecified slot for PoA – VERB[location: __]
c. Non-agreeing verbs: no underspecified slot for PoA – VERB

Note that under single agreement verbs, Lourenço includes verbs that have a path with one location lexically specified and another one that can agree with a subject or an object location, but also verbs without path that can be articulated at a point in space: if their location feature is unvalued, they can agree with one argument (Padden’s clitic cases discussed above). Lourenço’s (2018) account of agreement is based on Baker’s (2008: 155) Case-Dependency of Agreement Parameter, which states that a functional head only agrees with a DP if this same head assigns Case to this DP. He assumes that in Libras this parameter is set positively and that it is a nominative-accusative language, where these cases are assigned to subject and object, respectively. It follows that the first agreement slot of the verb will mark agreement with the nominative subject DP and the second agreement slot with the accusative object: the [location: __] feature in T agrees with the nominative-marked argument, and the [location: __] feature in v agrees with the accusative-marked argument, as represented in (37) (Lourenço 2018: 155).

(37)
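How the underspecified slots in (36) interact with this Case-based distribution of [location] probes can be schematized roughly as follows. This is my own illustration, not Lourenço's implementation; the loci, the glosses, and the left-to-right filling of open slots are assumptions made for the example.

```python
# Sketch of co-localization: unvalued [location] slots of the verb are filled
# by the loci of its arguments (first open slot via T/nominative subject,
# second via v/accusative object), while lexically valued locations stay fixed.

def co_localize(slots, argument_loci):
    """Fill each unvalued (None) slot, in order, with the next argument locus."""
    fillers = iter(argument_loci)
    return [slot if slot is not None else next(fillers) for slot in slots]

# (36a) Double agreement verb: both slots open; the verb moves from the
#       subject locus to the object locus.
print(co_localize([None, None], ["3a", "3b"]))     # -> ['3a', '3b']

# (36b) Single agreement verb: one location lexically fixed, one open slot
#       co-localized with a single argument locus.
print(co_localize(["chest", None], ["3a"]))        # -> ['chest', '3a']

# (36c) Non-agreeing (body-anchored) verb: no open slot; argument loci are
#       irrelevant and the lexical location cannot be overridden.
print(co_localize(["forehead"], []))               # -> ['forehead']
```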

Since V in Libras is independently argued not to move to v and T, affix hopping is argued to take place and to be resolved at PF, giving the actual agreeing form of the verb by spelling out the [location: __] values on it. In ditransitive constructions, the indirect object argument is taken to be merged above the direct object DP as the specifier of an ApplicativeP, and v is assumed to assign inherent dative case to it and structural accusative case to the direct object DP. Since the [location: __] feature on v will probe into the valued feature of its closest argument, agreement will be with the indirect object, as standardly observed for ditransitives across sign languages. This analysis also tackles the class of backwards verbs with an ergative analysis, but based on the Case assignment assumptions (the Case-Dependency of Agreement Parameter) adopted in the system (unlike Pfau et al.’s (2018) account, based on the


ordering of Merge and Move and of the subsequent Agree operations). Since the first agreement slot in backwards verbs is with the object, it is T that must assign structural nominative case to the object (for this to happen, v must move to T in order to create a transparent domain, namely a larger vP phase, for Case assignment by T to the object). As for the subject, it is argued to receive inherent ergative case: v assigns inherent ergative case to the external argument. Finally, Lourenço (2018) argues that what had been analyzed as an agreement auxiliary of the indexical type in Libras is in fact a topic marker indicating that subject and object have been moved to topic position. His main argument is that the argument nominals appearing before the auxiliary and the auxiliary itself are marked with eye gaze and raised eyebrows, two non-manual markers that are typical for elements in topic position (cf. Kimmelman & Pfau, Chapter 26). In addition, it is in complementary distribution with a postnominal pointing index co-appearing with the nominals in topic position. For these reasons, Lourenço does not consider it an auxiliary, but rather an indexical topic marker, to be glossed as xIXy. It is the spelling out of topic features in a functional head under CP, where the topic nominals are hosted as well. It is thus not connected to the agreement system, and consequently the direction of the path is always the same, irrespective of the verb type (regular or backwards agreement). One question that remains unanswered, though, is why the path must always move from the subject to the object location, whereas topic nominals can in principle be ordered freely, and the grammatical functions of the elements in A-positions they are linked to are only determined at the TP level. The path seems to be linked to the TP-internal predication, and not simply to topic marking. In any case, the account holds for Libras, but not for other sign languages where the agreement auxiliary does not occupy a topic position or bear the corresponding topic marking. Be that as it may, it raises the interesting question of whether elements that look very similar on the surface across different sign languages can have very different morphosyntactic properties.

5.3.2.3  Clitic analysis

Although Fischer (1975) already pointed to the phenomenon of agreement as cliticization of pronouns onto verbs, most research after that has taken for granted that the phenomenon at stake is actually agreement (with a few other exceptions like Keller (1998) or Barberà (2015)). However, Nevins (2011) has laid the groundwork for a clitic analysis on the basis of a number of morphosyntactic arguments stemming from the study of bona fide clitics in different spoken languages.9 For him, what has so far been thought of as agreement is in fact the incorporation of pronominal clitics. The properties of clitics Nevins focuses on are the following ones: (i) they are prosodically weak pronouns; (ii) they can show distinctions for person only, and number can be dissociated from them; (iii) they may double NP arguments; (iv) clitic hosts show low selectivity; (v) the combinations of host and clitic are not interpreted as forming a paradigm, in contrast to agreement; (vi) clitics tend to ‘compete’ in clitic clusters (a syntagmatic property); (vii) they do not vary with tense; (viii) subject clitic doubling may be optional or obligatory, unlike agreement. Nevins adheres to the view that sign language verb classes reduce to two, plain and non-plain, as argued for by Quadros & Quer (2008). Plain and non-plain verb classes would correspond to clitic-hosting and non-clitic-hosting verbs. For him, clitics include both locative and person clitics, so no distinction is made between spatial and agreement


verbs. As clitics, they are D heads coupled with their associated argument, in line with the big DP hypothesis. He argues that what have been considered odd properties of sign language agreement can be automatically accounted for under a clitic analysis. Among those properties, we find the following ones (for the full list and discussion, see Nevins (2011: 181)): subject marking is optional; number marking can be dissociated from person marking; marking of the indirect object has preference over the direct object; locative marking exists. As Pfau et al. (2018: 17–19) point out, some of the arguments brought up to support a clitic analysis against an agreement analysis are not decisive. First, optionality of subject marking can be reanalyzed as default marking of agreement, where the path starts at a default location. This reinterpretation of the facts supports the conclusion that, under Preminger’s (2009)10 diagnostic to discern between agreement and clitics, sign language agreement falls on the agreement side, in contrast to Nevins’ conclusion. Second, there seems to be no person competition between markers of internal arguments, since indirect object marking always prevails over direct object marking, something that is also attested for agreement and that can furthermore be explained by the higher hierarchical position of the indirect object above the direct object. Third, tense is not morphologically marked in the vast majority of sign languages, so invariability with respect to tense cannot be used as a criterion. Moreover, it is unclear whether non-finite domains exist in sign languages, thus weakening the argument of combinability of clitics with non-finite verbs, as opposed to agreement. Fourth, Pfau et al. (2018) argue that the unification of spatial and agreement verbs cannot account for the different interpretation of path in the two classes. Fifth, the clitic analysis has nothing to say about the behavior of backwards agreement verbs, unlike the thematic analysis or the minimalist syntactic analyses by Pfau et al. (2018) and Lourenço (2018), because there is no evidence that the arguments project in an inverse order vis-à-vis the regular agreement pattern. Sixth, when combined with the verbal stem, clitics would phonologically assimilate completely to it, only contributing the location parameter. This situation seems quite unusual, given what is known about phonological assimilation processes in sign languages. Seventh, in order to obtain the right ordering of clitics and verbs, one would need additional machinery like verb raising. Lastly, there is no evidence of an intermediate stage of cliticization between pronominal signs and cliticized forms, either synchronically or diachronically, not even for a sign language whose emergence has been documented from its origins like Nicaraguan Sign Language (NSL) (Senghas & Coppola 2001). To these objections, it can be added that if clitics are characterized by no or low selectivity by their hosts, it remains unexplained why they can only be hosted by agreement and spatial verbs and not by plain verbs. It could be counterargued that phonological specifications block cliticization because of body-anchoring, but within the plain group, there are cases like the ASL verb WANT that allow for localization at a referential location, thus arguably an instance of cliticization (Padden 1990; see Section 5.3.2.1 above). Note as well that agreement by hand orientation/facing, in combination with path or on its own, is left out of the account.

5.4  Conclusion

Formal approaches to agreement in sign languages, despite the different perspectives taken, have jointly contributed crucial and very rich evidence to support the linguistic status of this phenomenon. Experimental research on agreement (Hosemann, Chapter 6)


opens further perspectives on it. The strength of the individual analyses is not undermined by the existence of empirical or methodological challenges, which every linguistic account faces. The theoretical tools used further reinforce the view that current theorizing on sign language grammars must benefit from the rich scholarship on agreement in a broad variety of spoken languages. The complexities that agreement by itself raises in both sign and spoken languages cannot but help us better understand its properties and the range of variation of its instantiations vis-à-vis related phenomena. When languages of the manual-visual modality enter the empirical pool, we advance our understanding not only of the impact of language modality but also of the core properties of agreement in natural language tout court.

Acknowledgments

The work on this chapter has been supported by the SIGN-HUB project (2016-2020), funded by the European Union’s Horizon 2020 research and innovation program under grant agreement No 693349. I would also like to thank the Spanish Ministry of Economy, Industry, and Competitiveness and FEDER Funds (FFI2015-68594-P), as well as the Government of the Generalitat de Catalunya (2017 SGR 1478). I am very grateful to Santiago Frigola for his support with the LSC illustrations.

Notes

1 For a good review of gestural approaches to agreement, see Pfau et al. (2018: 7–11).
2 In morphophonological terms, the initial and final locations of the verb syllable are aligned with the corresponding argument locations (Sandler 1989).
3 Here we put aside classifiers as a particular type of predicates in the native lexicons of sign languages (for details, see Tang et al., Chapter 7) and focus on lexical spatial predicates that can potentially agree with arbitrary locations.
4 For formal approaches that take into consideration the gestural component of agreement and pronouns, see, for instance, Rathmann & Mathur (2008), Mathur & Rathmann (2012), and Lillo-Martin & Meier (2011).
5 For arguments against Rathmann & Mathur’s (2008) evidence for the agreement/spatial distinction, see Quadros & Quer (2008: 542–543).
6 For these cases, Costello (2015: 297) proposes that the verb does not acquire its identity feature as with single argument agreement, but rather through a process of nominalization: instead of merging with a vP projection, it merges with an nP one that nominalizes it. The DP projection on top of it has a valued identity feature that will be pragmatically associated in a loose way with that of another DP.
7 The authors recognize that unlike in synthetic/analytic splits in spoken languages, typically triggered by tense/aspect or voice, the one in DGS is only determined by lexical factors (Pfau et al. 2018: 22).
8 Quadros & Quer (2008: 545–546) propose that backwards verbs are a sort of hybrid, in the sense that due to their classifier origin they retain the path marking of spatial verbs, but can still agree with the person features of their arguments.
9 Nevins’ (2011: 183) interpretation of agreement marking as clitics refers to manual marking. In addition, he suggests that the non-manual agreement postulated for ASL (see Section 2.4) might be a case of agreement proper.
10 In Nevins’ (2011: 184) rendition, Preminger’s (2009) diagnostic establishes that “in the environment where the element serving as the target for agreement can be moved past [a] clause boundary, and the agreement morpheme on the verb changes to default, the morpheme is an agreement marker; otherwise, it is a clitic (i.e., pronominal)”.


References

Bahan, Benjamin. 1996. Non-manual realization of agreement in ASL. Boston, MA: Boston University PhD dissertation.
Baker, Mark C. 2008. The syntax of agreement and concord. Cambridge: Cambridge University Press.
Barberà, Gemma. 2015. The meaning of space in sign language. Reference, specificity and structure in Catalan Sign Language discourse. Berlin: De Gruyter Mouton and Ishara Press.
Barlow, Michael & Charles Ferguson (eds.). 1988. Agreement in natural language. Stanford, CA: CSLI.
Bickel, Balthasar & Johanna Nichols. 2007. Inflectional morphology. In Timothy Shopen (ed.), Language typology and syntactic description, vol. III (2nd edition), 169–240. Cambridge: Cambridge University Press.
Bos, Heleen F. 1994. An auxiliary verb in Sign Language of the Netherlands. In Inger Ahlgren, Brita Bergman, & Mary Brennan (eds.), Perspectives on sign language structure, 37–53. Durham: ISLA.
Bos, Heleen. 2017[1998]. An analysis of main verb agreement and auxiliary agreement in NGT within the theory of Conceptual Semantics (Jackendoff 1990). Sign Language & Linguistics 20(2). 228–252.
Chomsky, Noam. 1995. The minimalist program. Cambridge, MA: MIT Press.
Chomsky, Noam. 2000. Minimalist inquiries: The framework. In Roger Martin, David Michaels, & Juan Uriagereka (eds.), Step by step. Essays on minimalist syntax in honor of Howard Lasnik, 89–156. Cambridge, MA: MIT Press.
Costello, Brendan D.N. 2015. Language and modality: Effects of the use of space in the agreement system of lengua de signos española (Spanish Sign Language). Amsterdam: University of Amsterdam and University of the Basque Country PhD dissertation.
Engberg-Pedersen, E. 1993. Space in Danish Sign Language. The semantics and morphosyntax of the use of space in a visual sign language. Hamburg: Signum Press.
Fischer, Susan. 1975. Influences on word order change in American Sign Language. In Charles Li (ed.), Word order and word order change, 1–25. Austin, TX: University of Texas Press.
Fischer, Susan & Bonnie Gough. 1978. Verbs in American Sign Language. Sign Language Studies 18. 17–48.
Friedman, L.A. 1975. Space, time, and person reference in American Sign Language. Language 51. 940–961.
Gökgöz, Kadir. 2013. The nature of object marking in American Sign Language. West Lafayette, IN: Purdue University PhD dissertation.
Jackendoff, Ray. 1990. Semantic structures. Cambridge, MA: MIT Press.
Jackendoff, Ray. 1997. The architecture of the language faculty. Cambridge, MA: MIT Press.
Keller, Jörg. 1998. Aspekte der Raumnutzung in der Deutschen Gebärdensprache. Hamburg: Signum.
Liddell, Scott K. 2000. Blended spaces and deixis in sign language discourse. In David McNeill (ed.), Gesture and language, 331–357. Cambridge: Cambridge University Press.
Liddell, Scott K. 2003. Grammar, gesture, and meaning in American Sign Language. Cambridge: Cambridge University Press.
Lillo-Martin, Diane & Richard P. Meier. 2011. On the linguistic status of ‘agreement’ in sign languages. Theoretical Linguistics 37(3/4). 95–141.
Lourenço, Guilherme. 2018. Verb agreement in Brazilian Sign Language: Morphophonology, syntax and semantics. Belo Horizonte: Universidade Federal de Minas Gerais PhD dissertation.
Massone, María Ignacia & Mónica Curiel. 2004. Sign order in Argentine Sign Language. Sign Language Studies 5(1). 63–93.
Mathur, Gaurav. 2000. Verb agreement as alignment in signed languages. Cambridge, MA: MIT PhD dissertation.
Mathur, Gaurav & Christian Rathmann. 2012. Verb agreement. In Roland Pfau, Markus Steinbach, & Bencie Woll (eds.), Sign language. An international handbook, 136–157. Berlin: De Gruyter Mouton.
Meir, Irit. 1998. Thematic structure and verb agreement in Israeli Sign Language. Jerusalem: Hebrew University of Jerusalem PhD dissertation.
Meir, Irit. 2002. A cross-modality perspective on verb agreement. Natural Language and Linguistic Theory 20. 413–450.
Meir, Irit. 2012. The evolution of verb classes and verb agreement in sign languages. Theoretical Linguistics 38(1/2). 145–152.
Meir, Irit. 2016. Grammaticalization is not the full story: A non-grammaticalization account of the emergence of sign language agreement morphemes. MMM10 Online Proceedings. 112–124.
Müller, Gereon. 2009. Ergativity, accusativity, and the order of Merge and Agree. In Kleanthes K. Grohmann (ed.), Explorations of phase theory. Features and arguments, 269–308. Berlin: Mouton de Gruyter.
Neidle, Carol, Judy Kegl, Dawn MacLaughlin, Benjamin Bahan & Robert G. Lee. 2000. The syntax of American Sign Language. Functional categories and hierarchical structure. Cambridge, MA: MIT Press.
Nevins, Andrew. 2011. Prospects and challenges for a clitic analysis of (A)SL agreement. Theoretical Linguistics 37. 173–187.
Padden, Carol A. 1983[1988]. Interaction of morphology and syntax in American Sign Language. New York/London: Garland Publishing.
Padden, Carol A. 1990. The relation between space and grammar in ASL verb morphology. In Ceil Lucas (ed.), Sign language research: Theoretical issues, 118–132. Washington, DC: Gallaudet University Press.
Pfau, Roland & Markus Steinbach. 2003. Optimal reciprocals in German Sign Language. Sign Language & Linguistics 6(1). 3–42.
Pfau, Roland, Martin Salzmann, & Markus Steinbach. 2018. The syntax of sign language agreement: Common ingredients, but unusual recipe. Glossa 3(1). 107–146.
Preminger, Omer. 2009. Breaking agreements: Distinguishing agreement and clitic doubling by their failures. Linguistic Inquiry 40. 619–666.
Quadros, Ronice M. de & Josep Quer. 2008. Back to backwards and moving on: On agreement, auxiliaries and verb classes in sign languages. In Ronice M. de Quadros (ed.), Sign languages: spinning and unraveling the past, present and future. TISLR9, forty-five papers and three posters from the 9th Theoretical Issues in Sign Language Research Conference, Florianópolis, Brazil, December 2006, 530–551. Petrópolis/RJ: Editora Arara Azul.
Quer, Josep. 2010. Signed agreement: Putting the arguments together. Presentation at the conference Theoretical Issues in Sign Language Research 10, Purdue University (USA).
Quer, Josep. 2011. When agreeing to disagree is not enough: Further arguments for the linguistic status of sign language agreement. Theoretical Linguistics 37(3/4). 189–196.
Rathmann, Christian. 2000. Does the presence of a person agreement marker predict word order in sign languages? Presentation at the 7th International Conference on Theoretical Issues in Sign Language Research, University of Amsterdam.
Rathmann, Christian & Gaurav Mathur. 2008. Verb agreement as a linguistic innovation in signed languages. In Josep Quer (ed.), Signs of the time, 191–216. Seedorf: Signum Verlag.
Sandler, Wendy. 1989. Phonological representation of the sign. Linearity and nonlinearity in American Sign Language. Dordrecht: Foris.
Sandler, Wendy & Diane Lillo-Martin. 2006. Sign language and linguistic universals. Cambridge: Cambridge University Press.
Schembri, Adam, Jordan Fenlon, & Kearsy Cormier. 2018. Indicating verbs as typologically unique constructions: Reconsidering verb ‘agreement’ in sign languages. Glossa 3(1). 89.
Senghas, Ann & Marie Coppola. 2001. Children creating language: How Nicaraguan Sign Language acquired a spatial grammar. Psychological Science 12(4). 323–328.
Smith, Wayne. 1990. Evidence for auxiliaries in Taiwanese Sign Language. In Susan Fischer & Patricia Siple (eds.), Theoretical issues in sign language research, Vol. 1: Linguistics, 211–228. Chicago: University of Chicago Press.
Steele, Susan. 1978. Word order variation: a typological study. In Joseph H. Greenberg, Charles A. Ferguson, & Edith A. Moravcsik (eds.), Universals of human language: IV: Syntax, 585–623. Stanford: Stanford University Press.
Steinbach, Markus. 2011. What do agreement auxiliaries reveal about the grammar of sign language agreement? Theoretical Linguistics 37. 209–221.
Steinbach, Markus & Roland Pfau. 2007. Grammaticalization of auxiliaries in sign languages. In Pamela Perniss, Roland Pfau, & Markus Steinbach (eds.), Visible variation: Comparative studies on sign language structure, 303–339. Berlin: Mouton de Gruyter.
Thompson, Robin, Karen Emmorey, & Robert Kluender. 2006. The relation between eye gaze and verb agreement in American Sign Language: An eye-tracking study. Natural Language & Linguistic Theory 24. 571–604.
Zwitserlood, Inge & Ingeborg van Gijn. 2006. Agreement phenomena in Sign Language of the Netherlands. In Peter Ackema, Patrick Brandt, Maaike Schoorlemmer, & Fred Weerman (eds.), Arguments and agreement, 195–229. Oxford: Oxford University Press.


6 Verb agreement: Experimental perspectives

Jana Hosemann

6.1  Introduction

Imagine two people in a room, one passing a cup of coffee to the other and asking for sugar. From a psycholinguistic perspective, to describe this situation correctly, a speaker or signer must be able to unequivocally report who is performing the action and who is undergoing or receiving the action, i.e., who is passing the sugar to whom. Together with word order and morphological case, verb agreement is one way in which natural languages can identify the agent and the patient, respectively the syntactic subject and object, of a proposition. In spoken languages, agreement is established by a concatenation of lexical verb stems and agreement morphemes that mark syntactic features such as person and number, as in the sentence ‘Kim passes Jonah the sugar’. In some spoken languages, agreement is also marked by tone or voicing (Hansson 2004; Palancar & Léonard 2016), but generally, verb agreement in spoken languages is realized by affixation. In contrast, what is called ‘agreement’ in sign languages is realized by an overlap of location in signing space. In other words, agreement verbs generally realize subject and object agreement (i.e., double agreement) by moving from the location associated with the subject towards the location associated with the object. Figure 6.1 depicts sign language verb agreement conveyed by a change in hand orientation (a), by path movement (b), or by both (c) (see Quer, Chapter 5, for theoretical analyses of this phenomenon).


(a) DGS: EXPLAIN

(b) DGS: GIVE

(c) DGS: CRITIQUE

Figure 6.1  Pictures of the German Sign Language (DGS) verbs EXPLAIN ((a), orientation change only), GIVE ((b), movement change only) and CRITIQUE ((c), movement and orientation change). The verb 1EXPLAIN2 marks 1st person subject and 2nd person object (top) and vice versa (bottom). The verb 1CRITIQUE3a marks 1st person subject and 3rd person object (top) and vice versa (bottom); 3aGIVE3b marks two 3rd person referents (© Jana Hosemann 2015)

The fact that agreement verbs, spatial verbs, and index signs (pronouns or demonstratives) can be spatially modified and directed to different loci in signing space is unique to sign languages. The linguistic status of this phenomenon has been subject to an extensive discussion among sign language linguists. One outcome of this debate is, e.g., a special issue of Theoretical Linguistics on the linguistic status of agreement (Krifka & Gärtner 2011; for further comments see Wilbur (2013)). The discussion is dominated by two major lines of thought, each preferring a different term for the same phenomenon: one approach, which prefers the term verb agreement, argues that underlying morphosyntactic principles give rise to this phenomenon. According to this line of argumentation, agreement in sign languages also realizes grammatical features of the verb’s syntactic arguments. Similar to agreement in spoken languages – which can be described as the copying of grammatical features from the noun phrase to the verb – agreement in sign languages is the copying of location features of the verb’s arguments to the verb’s beginning and end point. Consequently, agreement verbs in sign languages have an underspecified beginning and end point that is established by the referential loci of their arguments, and thereby also realize person and number features (Lillo-Martin & Meier 2011; Quer 2011; Rathmann & Mathur 2011; Quer, Chapter 5). In contrast, the other approach often uses the term indicating verbs and argues that the phenomenon is not based on morphosyntactic principles, but that it is rather a partially gestural phenomenon (Liddell 2000; Cysouw 2011; Schembri et al. 2018). According to this reasoning, directing a verb towards the location of a referent is seen as “gestural pointing [that] is overlaid on the lexical structure of these signs” (Liddell 2011: 168). As a theoretical consequence, the phenomenon would be a combination of morphemes and pointing gestures, and not the realization of a verb’s arguments. However, in the literature the terms ‘agreement (verbs)’ and ‘indicating (verbs)’ are complemented by a third


term, ‘directionality’. With a theoretically more neutral perspective, directionality refers to the change in path movement or palm facing direction of the sign. I will use the words ‘agreement’ and ‘directionality’ interchangeably to describe this sign language-specific phenomenon. However, using the term ‘agreement’ puts the focus more on the linguistic nature of the phenomenon, whereas ‘directionality’ highlights the manual aspects of a direction change. Note that the phenomenon is often discussed with reference to verbs, but it also affects pronouns (‘he/she’ etc.) and demonstratives (‘this/that’). Thinking about directionality in sign languages from a diachronic perspective, it is noteworthy that sign languages are comparatively young languages. Agreement systems originate from a set of directional gestures in emerging sign languages (such as home sign systems) and develop into a linguistic system as sign languages mature (Meier 2002). Thus, directionality in emerging sign languages appears to be gestural, but directionality in mature sign languages becomes linguistic. The arguments substantiating the morphosyntactic analysis of directionality in mature sign languages are numerous and highly convincing from a theoretical perspective (see Quer, Chapter 5). For example, Meier (2002) lists a set of arguments – such as verb specifications, grammaticalization processes, or agreement-marking auxiliaries for plain verbs – which show how directionality in mature sign languages follows language-specific principles, and can be neither arbitrary nor gestural. As a basic approach to the following investigations of directionality in mature sign languages, the phenomenon is understood as linguistic, and specifically morphosyntactic, in nature. However, even within the morphosyntactic explanation of directionality, there seems to be an intuition that agreement systems in sign languages are not fully equivalent to agreement systems in spoken languages. This intuition comes from the fact that agreement systems in sign languages are established via location matching between subject/object and verb, whose phonological form is influenced by contextual properties. The concrete location associated with subject or object, and thus the concrete location an agreement verb or pronoun is directed to, can be influenced by the actual position or height of a present referent. Thus, the phonological realization of the pronoun or agreement verb appears to be context-dependent and not determined. This aspect of sign language agreement clearly constitutes a modality-specific characteristic. In the present chapter, I follow two objectives: first, I summarize empirical and experimental investigations of sign language agreement and underline the resulting arguments for the morphosyntactic explanation. Second, although I follow the morphosyntactic approach, I additionally want to draw attention to the modality-specific aspects of sign language agreement, which evidently set it apart from spoken language agreement systems. Section 6.2 provides an overview of the acquisition processes of sign language agreement and shows parallels with the acquisition of spoken language agreement. Section 6.3 reports psycholinguistic offline experiments on sign language agreement that measure the evaluation or production of agreement. Section 6.4 presents neurolinguistic event-related potential online experiments on the processing of agreement and agreement violations. The chapter closes with a short conclusion in Section 6.5.

6.2 The acquisition of verb agreement

Acquiring a first language is an incredibly complex challenge: for hearing children, it starts with an indistinct sound stream, while for profoundly deaf children it starts with unintelligible movements of arms, hands, bodies, and faces. At the end, both deaf and hearing children have acquired in a very similar manner a lexicon with approximately
15,000 lexeme entries and a highly complex set of grammatical rules for combining those lexemes (Meier 1991). For hearing children, the acquisition of verb morphology strongly depends on the morphological complexity of their target language: if a language has a complex verbal morphology, such as, for example, Spanish or Italian, children master the correctly inflected forms around the age of 2;0-2;3 (i.e., from two years and zero months to two years and three months; Guasti 1992). For children acquiring a language with an impoverished verb morphology, such as English, in contrast, correct inflection takes longer and is fully mastered only around the age of 3;0 (e.g., Brown 1973).

For deaf children, the acquisition of directionality with agreement verbs and with personal pronouns presents the challenge of differentiating phonological path movements with no further meaning from the path movements of inflected agreement verbs that mark subject and object. Furthermore, deaf children need to learn the difference between a locus in signing space that refers to a present referent and a locus in signing space that refers to the abstract concept of a non-present referent. These referential loci are called R-loci (see the detailed description in Quer, Chapter 5).

The current section presents three studies on the native acquisition of agreement in sign languages. Most studies on the acquisition of sign language agreement have focused on American Sign Language (ASL) and the production of person agreement with present referents (Hoffmeister 1978; Meier 1982; Loew 1984; Newport & Meier 1985; Lillo-Martin 1986; Casey 2003). Other investigations have studied agreement acquisition in other sign languages such as British Sign Language (BSL; Morgan et al. 2002, 2006); DGS (Hänel 2005a); Hong Kong Sign Language (HKSL; Tang et al. 2008); Brazilian Sign Language (Libras; De Quadros 1997); Italian Sign Language (LIS; Pizzuto 2002); and Sign Language of the Netherlands (NGT; van den Bogaerde & Baker 1996). These longitudinal studies typically investigated data from two to four deaf children who acquired a sign language from their deaf parents. One consistent result is that these deaf children mastered the agreement system with present referents at an approximate age of 3;0. In this process, children usually start by moving their hands in a gestural manner to point to objects and referents. Comparing directional signs with directional gestures, Casey (2003) observed that 95% of the gestures were directional from a very early age. In contrast, children between the ages of 1;6 and 2;11 produced only 35% of potentially directional verbs in a directional, agreeing manner. Hence, children acquire directionality in gestures earlier than in agreement verbs, although the physical forms are very similar. This points to the fact that acquiring directionality in agreement verbs requires a further abstract conceptual step, which is examined in the studies described in the following paragraphs.

Meier (2002) presents data from three congenitally deaf children learning ASL from their deaf signing parents. Their ages during data collection ranged from 1;6 to 3;9. On a monthly basis, spontaneous conversations between the child and a parent were recorded (data based on Meier (1982)). Since the directional movement between the subject and the object often represents the notion of transfer (either literal or abstract) between agent and patient, directionality appears to be iconic (cf. Meir 1998).
This iconic mapping between the form and its meaning, as well as the spatial analogy between the referents’ position and the hand movement, might be the essential elements used by children to acquire directionality (as predicted by the mimetic or the spatial analogy model). In contrast, the morphological model is based on grammatical principles. The predictions made by this model are confirmed by Meier (2002): (i) verb agreement is acquired relatively late; (ii) children prefer single agreement (over double agreement) because it is morphosemantically simpler; (iii) children omit agreement forms even in contexts that require an agreement
form, again because the uninflected forms are morphologically simpler. Although children are sensitive to the optionality of double agreement versus single agreement in ASL, they omit subject agreement much more frequently than object agreement (see Meier 1982). Thus, before mastering the directing of verbs from the location of the subject towards the location of the object, children omit subject agreement and only direct verbs towards the location of the object. These findings provide evidence that children learn the grammatical rule of agreement step by step. In contrast, if the acquisition of verb agreement were based on copying the iconic form from their parents’ input, one would expect children to produce more double agreement than single agreement. Therefore, Meier (2002) argues that verb agreement is part of the morphosyntax of sign languages.

Similarly, Hänel (2005a) conducted a longitudinal study with two deaf children (2;2 to 3;2 and 2;2 to 3;4) who learned DGS from their deaf parents. The results, also presented in Hänel (2005b), revealed two critical phases in the acquisition of verb agreement. In phase 1 (until the mean age of 2;5), both children used fewer agreement verbs than plain verbs and were inconsistent in marking the subject or the object. Instead, agreement verbs were produced in the ‘citation form’ (i.e., for most signs this involves the sign starting at or near the signer’s body and moving away from the body towards a neutral central location in the signing space). The marking of non-present subjects or objects was very rare, which indicates that these children had not yet fully acquired the morphosyntactic concept of R-loci, i.e., the agreement mechanism. In phase 2 (from the mean age of 2;5 until approximately 3;3), both children productively used subject agreement and established non-present subject referents. Furthermore, they productively used non-1st person object agreement. Interestingly, the children produced ‘reversal errors’, i.e., they produced the citation form 1HELPN when in fact expressing that the mother helps her (the child), but this type of error was observed only in phase 1. In phase 2, in contrast, one child overgeneralized the agreement rule, dropping an overt subject and marking a plain verb for the location of the subject (SCHWIMMEN3 for ‘he is swimming’; Hänel 2005b: 220). In general, in phase 2, both children showed an increased use of agreement verbs and an increased use of subject pronouns. The apparent difference between phase 1 and phase 2 indicates a learning step in which both children acquired a grammatical principle. Although both children showed default agreement forms already in phase 1, the morphosyntactic principles that are considered essential for agreement were not yet acquired. Thus, both children fully acquired the agreement principle, i.e., the copying of R-loci features, only in phase 2.

Most morphosyntactic accounts of spatial agreement in sign languages assume that the process involves features that encode the grammatical category of person. For a discussion of the differentiation between 1st person versus non-1st person, or 1st, 2nd, and 3rd person marking in sign languages, see Lillo-Martin & Meier (2011), Quer (2011), Rathmann & Mathur (2011), and Wilbur (2013). For an alternative explanation in terms of an ‘identity’ feature instead of a person feature, see Costello (2016) (see also Quer, Chapter 5, for more details).
Petitto (1987) investigated the acquisition of the personal pronouns YOU and ME in ASL by two deaf children of deaf parents. Pronouns are pointing signs directed towards the addressee, another referent, or the signer. Their phonological form is identical to pointing gestures, which hearing and deaf children begin to use from the age of 0;9. The visual iconic characteristics of pronouns in sign languages (i.e., pointing towards the addressee means ‘you’, pointing towards one’s own chest means ‘I’) would suggest a very early understanding of this mechanism and no confusion by children. However, Petitto (1987) provides evidence for more complex stages in acquiring the correct use of pronouns.
After their first pointing gestures towards objects, locations, and persons, from around the age of 0;11, deaf children go through a long avoidance period during which the use of pointing is reduced. When they return to pointing towards themselves and others, children often make reversal errors: for example, they point away from themselves to mean ‘I’. Similar to hearing children, who also tend to make reversal errors with pronouns (Clark 1978; Chiat 1982), deaf children have full control of the pronouns YOU and ME only around the age of 2;1 to 2;3. If the use of directionality in personal pronouns were purely gestural, children should neither display a gap in their usage nor go through a phase of reversal mistakes. Reversal mistakes with pronouns highlight the change in meaning from being a proper name towards becoming an indexical. These results therefore indicate that directionality in personal pronouns is also acquired as a grammatical rule.

The findings of these three studies provide evidence that in becoming proficient in the correct use of directionality, children acquire a morphosyntactic rule. However, other studies on the acquisition of directionality point out that directionality is not grammatically obligatory. These studies argue for an analysis that is not based on morphosyntax. Since agreement verbs inflect for person and number features, Hou (2013) investigated the acquisition of plural marking in agreement verbs. Eleven third-generation deaf children (3;4 to 5;11) and a control group of 11 adult deaf native signers performed an elicitation task. Participants had to describe video clips with actions that elicited the plural forms of the directional verbs GIVE, SHOW, and TAKE. The results of the child group led to the conclusion that, in general, the acquisition of plural marking in verb agreement is protracted past the age of 5;0. Children did not use the plural markings as often and as systematically as the adult control group. The verbs’ complex argument structure with multiple agents and patients and the various options for marking plural referents (exhaustive, multiple, simultaneous or sequential reciprocal, classifiers, one-handed or two-handed dual marking, etc.) seem to result in an extended period of development. In contrast to the adult group, which also omitted plural marking on agreement verbs, children did not use alternative strategies to refer to the plural referents. Owing to this optionality of plural marking on agreement verbs, Hou (2013) argues that verb agreement is not grammatically obligatory.

Quadros & Lillo-Martin (2007) likewise criticize previous studies on the acquisition of sign language verb agreement and provide results for children using emblems and conventionalized gestures with directionality, which seems to open the door for an alternative view on agreement. However, the authors present data from children at a younger age compared to the above-mentioned studies: their recordings stopped at an age between 2;3 and 2;5/2;8, whereas previous studies showed that children only acquire the agreement system past 2;5 (cf. Hänel 2005b). Hence, Quadros & Lillo-Martin’s (2007) results are based on an age period during which children usually do not yet fully understand the agreement system and therefore produce fewer agreement verbs.

In summary, directionality towards present referents and objects originates in gestural or non-grammatical principles, as can be seen in emerging sign languages (Meier 2002).
But once directionality is productively used with non-present referents, there must be a grammatical principle underlying it (Loew 1984). Thus, the difference between directionality towards present and non-present referents marks a crucial step in language development. Compared to spoken languages, the acquisition of the agreement system in sign languages is completed later – around 3;0 in sign languages versus around 2;0 in spoken languages. Agreement with non-present referents is acquired even past the age of 3;0. This is due to the fact that agreement in sign languages involves a more complex interaction between
semantics and morphosyntax: children have to learn which verbs agree and which do not, as well as the argument structure of each verb – single agreement, double agreement, or backwards agreement. To date, there is no evidence that iconicity and the apparent spatial transparency of directionality facilitate agreement acquisition. The data on agreement acquisition provide evidence for a systematic learning process involving linguistic principles, such as the indexical use of pronouns and the morphosyntactic use of single versus double agreement. In contrast, if the agreement system were non-linguistic in nature, one would predict an earlier and more consistent use of pointing gestures towards people, and an earlier and more consistent use of directionality.

6.3 Verb agreement tested with offline methods

Human language competence can only be studied in indirect ways, by observing how humans understand a particular utterance or by observing the utterances humans produce. Thus, psycholinguistic experiments either investigate the processing of language input in perception studies, in which participants perceive a language stimulus, or they investigate the language production of participants by eliciting certain sentences in production studies.

Perception studies are based on the observation that processing a language chunk takes time. This process includes the following steps: the visual or auditory language input is perceived by the sensory organs and processed in the respective brain regions. If participants are given a certain task, for example a judgment task, they then make a decision. After this decision, a motor signal is sent from the brain to an executive body part (the hand, finger, or mouth), and the participant’s response can be measured. If the measured signal is an indirect behavioral response by the participant, for example the pressing of a button, the experimental method is called offline, because the processing is measured indirectly through a reaction time or the accuracy of a decision. In contrast, if the measured signal reflects the actual processing of the critical item (for example, in functional magnetic resonance imaging (fMRI) or in electroencephalography (EEG)), the method is called online. In contrast to perception studies, production studies are based upon the assumption that linguistic rules emerge in the language we produce.

In the following sections, I will present psycholinguistic experiments that investigated sign language agreement in offline experiments (Section 6.3.1, reaction time studies), in production experiments (Section 6.3.2, eye tracking studies), and in online experiments (Section 6.4, EEG studies).
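Before turning to the individual studies, the basic logic of an offline reaction time comparison can be made concrete with a minimal sketch. The sketch below is purely illustrative: the numbers are hypothetical, and the analysis (condition means plus a paired t-test across participants) is a generic example of the kind of offline comparison underlying the studies reviewed in this section, not the analysis pipeline of any particular study.

```python
# Illustrative sketch with hypothetical values: comparing reaction times
# (in ms) between two conditions of a repeated-measures offline experiment.
import numpy as np
from scipy import stats

# One mean reaction time per participant and condition (hypothetical data).
rt_condition_a = np.array([612, 598, 640, 655, 601, 587, 623, 610])  # e.g., related prime
rt_condition_b = np.array([651, 634, 662, 689, 640, 615, 660, 648])  # e.g., unrelated prime

# The effect of interest is the difference between the condition means.
effect_ms = rt_condition_b.mean() - rt_condition_a.mean()

# A paired t-test asks whether the difference is reliable across participants.
t_value, p_value = stats.ttest_rel(rt_condition_b, rt_condition_a)

print(f"mean difference: {effect_ms:.1f} ms, t = {t_value:.2f}, p = {p_value:.4f}")
```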

6.3.1 Agreement tested in reaction time studies

One of the first priming studies that aimed at investigating the organization of morphologically complex signs in the mental lexicon was conducted by Emmorey (1991). Studies with hearing participants had revealed that the base form of a verb and a morphological variant of that verb are closely organized in the mental lexicon: participants’ lexical decisions on a verb’s base form (the target) were facilitated when they had previously seen a morphological variant of that verb (the prime). This was explained by a spread of activation from prime to target. Inflected verb forms and verb base forms share a lexical representation, which is repeatedly activated by the prime and the target (Fowler et al. 1985). These findings can be accounted for by a so-called ‘satellite model’ of the organization of lexical entries in the mental lexicon: the verb’s lexical base form represents the nucleus, which is encircled by its morphologically related variants. Emmorey (1991)
conducted the first repetition priming study on ASL, which asked whether morphological priming is a modality-independent process, that is, whether ASL agreement verbs and their morphological variants are also organized in satellites. This study investigated the organization and recognition of morphologically complex forms, i.e., inflected agreement verbs, such as 1ASK2 or ASKdual, and aspectual forms, such as ASKhabitual, in comparison to the verb’s base form ASKbase. Fourteen deaf native signers with deaf parents participated in this study. They saw prime-target sign pairs and had to decide whether the target was a sign or a non-sign (lexical decision task). Target signs were verb base forms (ASKbase), which were preceded by (i) the same verb base form, (ii) a verb form inflected for agreement (1ASK2 or ASKdual), or (iii) a verb form inflected for aspect (ASKhabitual). If reaction times to the ASKbase form presented after the inflected forms, i.e., conditions (ii) and (iii), were similar to those in condition (i), this would confirm the satellite structure of lexeme entries in ASL. Indeed, the results confirmed that target signs preceded by the identical verb base form elicited significantly faster reaction times. However, when the prime was an inflected agreeing form, there was no significant priming effect. This is rather surprising, because it indicates that forms inflected for agreement, such as 1ASK2 or ASKdual, are not closely linked to the verb base form in the mental lexicon. Another finding was that aspect morphology elicits a stronger priming effect than agreement morphology. Emmorey (1991) interprets these results as indicating that the rule which works for all verbs (i.e., aspect) creates a closer relation to the lexical base form than the rule which only works for some verbs (i.e., agreement). The results from Emmorey (1991) thus suggest that base forms of agreement verbs and their inflected forms are not closely organized. Compared to spoken language verbs and their inflected forms, sign language verb forms might be semantically ‘heavier’, which leads to a reduced priming effect. This indicates a modality difference between sign language agreement systems and spoken language agreement systems.

Another offline study on agreement processing, in contrast, provides evidence for the core-linguistic nature of sign language agreement. Boudreault & Mayberry (2006) investigated the age of acquisition effect in relation to syntactic processing in deaf signers. They asked participants to detect errors in the structure of ASL sentences and tested three different groups of ASL learners: deaf native signers who learned ASL from their deaf parents (native L1 learners); deaf signers who were exposed to ASL at school from the age of 5–7 (early L1 learners); and deaf signers exposed to ASL at school from the age of 8–13 following a period at an oralist school (late L1 learners). The stimulus material included grammatical and ungrammatical examples of six syntactic structures, including negative sentences, wh-questions, relative clauses, sentences with classifier constructions, and sentences with agreement verbs. Participants were asked to detect the incorrect sentences, while the accuracy and reaction times of their responses were measured. The results showed an age of acquisition effect for all ASL sentence structures: late learners were less accurate and had longer reaction times compared to native learners.
For example, sentences with agreement verbs were judged correctly in approximately 95% of cases by native learners, but only in about 65–75% of cases by early and late learners. The results from Boudreault & Mayberry (2006) indicate that syntactic structures – including the agreement system – are more fundamentally internalized when acquired under typical conditions of language acquisition from birth. Thus, despite the uncertainty about the lexical organization of verbs in sign languages, agreement follows linguistic principles.

The question of which articulators participate in the agreement process is yet another issue. As described in Section 6.1, agreement is expressed manually by hand orientation and/or the direction of path movement. However, a number of eye tracking
studies have also looked at the role of a non-manual component, namely eye gaze, in the expression of directionality (Thompson 2006; Thompson et al. 2006, 2009, 2013).

6.3.2 Agreement tested in eye tracking studies

The claim that eye gaze is a syntactic marker of agreement was first made by Bahan (1996) and Neidle et al. (2000). Based on consultation with deaf native signers, the authors claimed that subject agreement is further marked by a head tilt towards the location associated with the subject, while object agreement is obligatorily marked by an eye gaze towards the location associated with the object. Since – according to their syntactic analysis – this non-manual agreement marking is the result of a feature checking mechanism in the agreement phrase, it appears with all three verb types, including transitive plain verbs.

Thompson et al. (2006; based on data from Thompson 2006) disproved Neidle et al.’s claim by conducting an eye tracking experiment. In this storytelling production experiment, the authors investigated participants’ eye movements during verb production. Participants were asked to invent a story about the two characters ‘Jack’ and ‘Jill’ and had to include certain given verbs (plain, agreement, spatial, and backwards verbs). Participants’ eye movements were recorded throughout with a head-mounted eye tracking device that records the scene (i.e., what the signer sees) and the signer’s eye movements within this scene. With this methodology, the authors were able to investigate the direction of the signers’ looks during their storytelling and, in particular, during their verb production.

The results showed a strong correlation between manual agreement with a referent’s location and eye gaze towards that location. In 73.8% of agreement verbs, participants gazed towards the location of the object (for backwards verbs, participants even directed their gaze towards the syntactic object in 82.5% of the cases). Also, in 72.22% of spatial verbs, participants gazed towards the location of the locative argument. Thus, for those verbs that mark directionality manually, Thompson et al. (2006) found a simultaneous eye gaze towards the syntactic object/locative argument of the verb in approximately three-fourths of the verb occurrences. In contrast, eye gaze during the production of plain verbs was significantly different: during plain verbs, participants directed their gaze towards the object in only 11.1% of the cases (in 40.71% gaze was directed towards the addressee, while in 44.88% it was directed towards some ‘other direction’). Interestingly, according to the authors, the direction of gaze during verb production is not driven by other discourse-regulating constraints, such as turn-taking signals or attention-checking mechanisms. Indeed, there is a close correlation between marking directionality manually and looking towards the syntactic object (for agreement verbs) or the locative argument (for spatial verbs). As a theoretical explanation, the authors argue that eye gaze during verb production in ASL is part of the grammatical principle that marks agreement manually. They claim that eye gaze marks the lowest accessible argument within a noun phrase accessibility hierarchy.

A comparable investigation conducted by Hosemann (2011) reveals similar effects in DGS. The results show that participants’ eye gaze was directed towards the final position of the verb, which coincided with the location of the object, in approximately two-thirds of verbs with manual agreement. Eye gaze during verbs that did not display any manual agreement was distributed towards the addressee or towards some ‘other direction’.
An analysis of the scope of eye gaze during verb production showed that the duration of eye gaze towards the object/locative coincided with the duration of the verb
production. This is a strong argument for the correlation between moving the eyes and moving the hands.

Interestingly, non-manual agreement marking via eye gaze, as just described, can be learned. Thompson et al. (2009) showed in two follow-up experiments that marking the object with eye gaze becomes increasingly systematic with greater ASL competence. Proficient L2 learners with more than 11 years of ASL experience showed an eye gaze pattern similar to that of native signers: they also marked the syntactic object of agreement verbs and the locative argument of spatial verbs in approximately 70% of verb instances. However, they also showed non-manual agreement marking with plain verbs, which clearly diverged from native signers. For plain verbs, proficient L2 signers directed their gaze towards the object in approximately 70% of cases as well, while native signers directed their gaze towards the object only about 11% of the time. Nevertheless, compared to novice L2 learners of ASL (maximally 15 months of ASL instruction) and non-signers, the eye gaze agreement pattern of proficient L2 learners followed linguistic structure. The eye gaze pattern of novice L2 learners and non-signers was random across all verb types.

The studies by Thompson et al. provide ground-breaking work in showing that directionality in sign languages is not only a linguistically driven phenomenon at the manual level but also encompasses non-manual eye gaze. This is a phenomenon unique to sign languages, which reinforces the intuition that agreement in sign languages is different from agreement in spoken languages. Nevertheless, the studies by Thompson et al. also provide evidence that agreement marking by eye gaze is grammatically motivated.

Offline experiments on sign language agreement thus offer a twofold picture: on the one hand, directionality is systematic and anchored in grammatical principles. On the other hand, directionality cannot be considered equivalent to agreement systems in spoken languages. Online experiments can further contribute to the picture by investigating sign language agreement at the moment of processing.

6.4 Verb agreement tested with online methods

In contrast to other psycholinguistic methods, like reaction time measurements, evaluation tasks, or production tasks, neurolinguistic methods reflect semantic or syntactic language processing directly at the moment it occurs. Imaging methods, like fMRI, aim at producing detailed pictures of the processing brain with a high spatial but low temporal resolution. In contrast, the EEG methodology provides detailed information on minimal neural activity with a high temporal but low spatial resolution. The event-related potential (ERP) paradigm measures the EEG response to a particular visual or auditory stimulus. Here, the summed neural activity needed to process an unexpected or complex item is compared to the neural activity needed to process an expected or less complex item. The unexpected occurrence of a certain word – either as a single word or within a sentence context – results in an electrophysiological potential (hence, event-related potential). The timing of the potential’s peak, its polarity (whether it is more negative or more positive compared to the potential of the expected event), and its distribution over the scalp can be interpreted with respect to a functional explanation. In neurolinguistic investigations of (spoken) languages conducted over the last few decades, researchers have observed different potentials that can be associated with different steps of sentence processing. The main distinction lies between semantic and syntactic processing. Semantic processing is typically associated with the N400 potential – a centrally distributed negativity peaking around 400 ms after the event onset. Syntactic processing
is typically associated with a late positivity or P600 – a more posteriorly distributed positivity approximately 600–800 ms after event onset (Kutas et al. 2006). Continued investigation of agreement violations in spoken languages has led to the picture that a morphosyntactic agreement violation in person or number results in a biphasic electrophysiological response: an early left anterior negativity followed by a late positivity with a rightward distribution (Molinaro et al. 2011). A few studies on sign languages have tested the agreement violation paradigm in order to see whether a biphasic ERP response to agreement violations can be replicated in sign languages. If a sign language sentence with an agreement violation elicited a biphasic ERP response, this would strongly indicate that directionality is a morphosyntactic process similar to agreement in spoken languages.
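To make the ERP logic described above concrete, the following sketch works through the basic computation on simulated data (not data from any study discussed in this chapter): single-trial epochs time-locked to stimulus onset are averaged per condition, a difference wave between the violation and control conditions is formed, and mean amplitudes are extracted in time windows conventionally associated with the N400 and the P600. All window boundaries and amplitude values here are arbitrary choices made for illustration only.

```python
# Illustrative sketch with simulated data: condition-average ERPs, a
# difference wave, and mean amplitudes in two a-priori time windows.
import numpy as np

fs = 500                                   # sampling rate in Hz
times = np.arange(-0.2, 1.2, 1 / fs)       # epoch from -200 ms to 1200 ms
rng = np.random.default_rng(0)
n_trials = 40

# Simulated single-trial epochs (trials x time points) for one electrode.
control_epochs = rng.normal(0.0, 2.0, (n_trials, times.size))
violation_epochs = rng.normal(0.0, 2.0, (n_trials, times.size))
# Add a hypothetical late positivity (P600-like bump) to the violation trials.
violation_epochs += 1.5 * np.exp(-((times - 0.7) ** 2) / (2 * 0.1 ** 2))

# ERPs are the across-trial averages; the difference wave contrasts conditions.
erp_control = control_epochs.mean(axis=0)
erp_violation = violation_epochs.mean(axis=0)
difference_wave = erp_violation - erp_control

def mean_amplitude(erp, t_start, t_end):
    """Mean amplitude of an ERP within a time window given in seconds."""
    window = (times >= t_start) & (times < t_end)
    return erp[window].mean()

print("300-500 ms (N400 window):", round(mean_amplitude(difference_wave, 0.3, 0.5), 2))
print("600-800 ms (P600 window):", round(mean_amplitude(difference_wave, 0.6, 0.8), 2))
```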

6.4.1 ERP studies on sign language agreement – a morphosyntactic analysis

Ever since Kutas & Hillyard (1983), morphosyntactic agreement violations in spoken languages have been extensively investigated in electrophysiological studies. Many of these studies investigated number agreement violations between full subject noun phrases and the verb, as in “The elected officials hopes* to succeed” (Osterhout & Mobley 1995: 742), or between a pronominal subject and a verb, as in “Every Monday, he mow* the lawn” (Coulson et al. 1998: 33). Agreement mismatches in spoken languages typically evoke a biphasic ERP pattern with a left anterior negativity (LAN) between 300–400 ms and a late positivity (P600/SPS) around 500 ms after stimulus onset.

Testing agreement mismatches in a sign language, Capek et al. (2009) conducted a study on the processing of morphosyntactic violations in ASL sentences. They presented videos with ASL sentences that were either morphosyntactically correct or incorrect. The correct sentences included two referents (either two 3rd person referents or one 3rd person referent and the signer) and an agreement verb moving correctly from the R-locus associated with the subject to the R-locus associated with the object. The incorrect sentences were identical to their correct counterparts, but included an incorrectly produced agreement verb. This ‘incorrect sentence’ condition included two kinds of violation: first, reversed verb agreement violation, and second, unspecified verb agreement violation (Capek et al. 2009: 8785). In the reversed verb agreement condition, the verb moved from object to subject location (instead of vice versa). In contrast, in the unspecified agreement violation, the verb moved from the position of the subject towards an unspecified locus that was not associated with the object or any other referent. The sentence in (1) displays the original stimulus example from Capek et al. (2009: 8785), including reversed and unspecified verb agreement violation.1

(1) MY NEW CAR BLACK CL3a PRO1 MUST 1WASH3a / 3aWASH1 / 1WASH3b EVERY WEEK3a.
    ‘My new car is black. I have to wash it every week.’
    (ASL, adapted from Capek et al. 2009: 8785)

Note that ASL is an SVO language, so that the critical verb precedes the object sign. In the correct condition, PRO1 MUST 1WASH3a EVERY WEEK3a, the critical verb moves from the subject location to the object location. In the reversed verb agreement violation condition, the verb moves from the R-locus of the object towards the R-locus of the subject (CAR CL3a … PRO1 MUST 3aWASH1). In the unspecified verb agreement violation condition, the verb moves from the R-locus of the subject towards an unspecified R-locus opposite to the assigned object R-locus (CAR CL3a … PRO1 MUST 1WASH3b). Interestingly, Capek et al. (2009) found a
difference in the ERP results with respect to the two different morphosyntactic violations. Both agreement violation conditions elicited an early anterior negativity (140–200 ms post sign onset in the reversed agreement violation and 200–360 ms in the unspecified agreement violation), followed by a late posterior positivity (P600) in the time windows 475–1200 ms and 425–1200 ms post sign onset, respectively. However, Capek and colleagues report that the anterior negativity differed between violations in its laterality: it was largest over the left lateral anterior site in the reversed agreement condition, while it was largest over the right lateral frontal site in the unspecified agreement condition. In order to explain this difference in ERP responses, they point out that there are different demands on the system in processing an unspecified agreement violation versus a reversed agreement violation. In an unspecified agreement violation, the agreement verb identifies an R-locus at which no referent had previously been located. Thus, participants interpreted the unspecified R-locus either as referring to a new referent, which might be introduced later in context, or as referring to the previously introduced referent located at a different R-locus. This seems to be a distinct processing procedure compared to a reversed agreement violation, in which the verb moves from the location of the object towards the location of the subject, instead of vice versa; in this case, both loci have been established in the previous discourse. The difference in ERP responses indicates that the interpretation of agreement in sign languages highly depends on the different associations between R-loci and referents. Despite that, Capek et al. (2009) claim that agreement violation in sign languages elicits a biphasic ERP response, which strongly indicates that directionality is a morphosyntactic process.

A second study investigating verb agreement violation during sentence processing was conducted by Hänel-Faulhaber et al. (2014) for DGS. They also presented sentences with two referents and an agreement verb. In contrast to Capek et al.’s study, here the referents were 3rd person referents in all instances, which were overtly assigned to locus 3a (referent A) and locus 3b (referent B). Sign order in the sentences was consistently SOV. Thus, the critical verb was produced after subject and object were established. Example (2) represents an original stimulus sentence from Hänel-Faulhaber et al. (2014: 7) in the correct condition.2

(2) BOY POINT3a GIRL POINT3b 3aNEEDLE3b REASON POINT3b SLOW SWIM.
    ‘The boy needles the girl because she is slowly swimming.’
    (DGS, Hänel-Faulhaber et al. 2014: 7)

The verb agreement violation condition in this study crucially differed from the agreement violation conditions in Capek et al. (2009). While Capek et al. investigated reversed agreement violations and unspecified agreement violations, Hänel-Faulhaber et al. (2014) presented the incorrectly inflected agreement verb beginning at an unspecified neutral R-locus in front of the signer (here indicated by subscript ‘3c’) and moving towards the location of the signer: BOY POINT3a GIRL POINT3b 3cNEEDLE1 […]. This is a crucial difference because neither the initial nor the final hold of the agreement verb overlaps with one of the R-loci associated with subject and object. EEG responses to the critical verb in the agreement violation condition showed a negative potential with a left lateralized frontal distribution (LAN) at 400–600 ms and a late positivity with a posterior distribution (P600) at 1000–1300 ms post sign onset.

Both studies provide evidence that verb agreement mismatches in sign languages – i.e., violations of R-loci – are processed similarly to spoken language agreement mismatches – i.e., violations of person and number features. This substantiates the claim that both agreement systems are based on morphosyntactic principles. Although these results
indicate that agreement in sign languages is a morphosyntactic phenomenon, the design of both studies shows that violations of person agreement in sign languages can take several forms, which seem to elicit distinct ERP responses: an agreement violation involving an ‘empty’ 3rd person R-locus elicited a greater anterior negativity over the right hemispheric site compared to a reversed agreement violation, or to the unrelated agreement violation used by Hänel-Faulhaber et al. (2014). This aspect is further discussed in the following ERP study on agreement violation, which yielded yet different results.

6.4.2 ERP studies on sign language agreement – an alternative analysis

In two ERP experiments, both presented in Hosemann (2015) and Hosemann et al. (2018), the authors investigated agreement violations in DGS. The first experiment was conducted similarly to the ERP experiments reported in Section 6.4.1 and investigated agreement violations with agreement verbs. The second experiment, however, broke new ground and investigated agreement violations with plain verbs. Both experiments resulted in different ERP responses compared to those described in Section 6.4.1, which supports the assumption that agreement in sign languages is not an exclusively morphosyntactic phenomenon.

Experiment A – agreement violation with agreement verbs – was designed in a similar way to Capek et al.’s (2009) and Hänel-Faulhaber et al.’s (2014) studies. Participants saw videos with a DGS sentence about the signer and a 3rd person referent. The first sentence established the referents and associated the 3rd person referent with an R-locus on the ipsilateral side of the signer. In the correct condition, the agreement verb in the second sentence was correctly inflected and moved from the subject (i.e., the signer) to the location of the object (location 3a on the ipsilateral side of the signer). In the incorrect condition, the agreement verb moved from the location of the signer towards location 3b on the contralateral side of the signer, which was referentially ‘empty’. This type of agreement violation is comparable to Capek et al.’s unspecified verb agreement violation. Example (3) shows an original stimulus sentence.

(3) MY FATHER IX3a SOCCER FAN. NEXT MATCH DATE 1INFORM3a / 1INFORM3b.
    ‘My father is a soccer fan. I will inform him / xxx about the date of the next match.’
    (DGS, Hosemann et al. 2018: 1110)

In contrast to agreement verbs, plain verbs are lexically fully specified and cannot change their path movement according to the location of the object. Thus, manipulating plain verbs such that the path movement extends towards the location of a 3rd person object constitutes a clear violation of the ‘agreement principle’. The second experiment, on agreement violation with plain verbs, did exactly this: plain verbs such as BUY, with a lexically specified downward movement, were signed with an extended path movement towards location 3a associated with the object, in order to mark artificial 3rd person agreement in the incorrect condition, see (4).

(4) IX1 LAPTOP BUY3a.
    ‘I *buy a laptop.’
    (*DGS, Hosemann et al. 2018: 1110)

Note that this sentence is semantically acceptable but phonologically incorrect in DGS. The authors therefore expected to find a clear morphosyntactic violation for sentences
like (4). The analysis of the ERP signal in these two experiments diverged from Capek et al.’s analysis in one important respect: instead of presenting spliced video sequences, the authors of this study presented continuous, unspliced videos and included the transition phase leading up to the onset of the crucial sign in their analysis. Based on findings in Hosemann et al. (2013), the authors analyzed different trigger positions relative to different phonological cues within the transition phase prior to the sign onset (see the sketch at the end of this subsection). These were: (i) the sign onset, defined as the first frame of the initial hold; and (ii) non-manual cues, defined as the first moment in which any non-manual cue(s) indicated the target location of the sign’s path movement. In line with Thompson et al.’s (2006) findings, in both experiments 3rd person object agreement was also marked by eye gaze. Interestingly, the eye gaze towards the final location of the verb preceded the sign onset by approximately 202 ms.

The ERP results for agreement violations with agreement verbs and agreement violations with plain verbs differed from each other, and also differed from the results of the previous ERP experiments described in Section 6.4.1. Incorrect agreement verbs (1INFORM3b) elicited two effects that seem to be unrelated: a right posterior positive deflection between 220–570 ms relative to the non-manual cue trigger preceded a left anterior effect between 300–600 ms relative to the sign onset trigger. The left anterior effect seems to be a positivity for the correct condition and therefore a task-related effect from the P300 family. In contrast, incorrect plain verbs (BUY3a) elicited a broadly distributed positive deflection around 470–820 ms after the non-manual cue trigger. None of the agreement violations resulted in a biphasic pattern of LAN and late positivity.

The different ERP patterns in the results suggest a different interpretation of incorrect agreement verbs versus incorrect plain verbs. Whereas incorrect agreement verbs are deviant because of a context-inappropriate path movement and/or palm orientation, these forms can in principle be appropriate in other contexts. Since path movements as well as the initial and final hold of an agreement verb provide semantic information that is evaluated within the context, the incorrect form of an agreement verb might trigger a reinterpretation of the sentence. The agreement violation 1INFORM3b, ending in a referentially ‘empty’ locus, could thus constitute a violation of information structure. Such a violation could be motivated, e.g., by an unexpected topic shift or the introduction of a new discourse referent. In contrast, this is not the case for incorrect plain verbs. The incorrect form of plain verbs, as constructed in the present experiment (BUY3a), is not appropriate in any context and is thus more difficult – if not impossible – to reinterpret. Hence, these sentences will be understood as wrong or not well-formed.

Recapitulating the results from Sections 6.4.1 and 6.4.2, we can conclude that different experiments with different agreement violation paradigms resulted in different ERP responses. Since the different types of agreement violations with agreement verbs (reversed versus unspecified versus incoherent agreement violation) may have led to different reinterpretations of the sentence content, the ERP responses found in each study could have been caused by different cognitive processes.
Thus, they should not be interpreted as the result of a single morphosyntactic agreement violation. Especially for sign language agreement, it is essential to take into account the semantic aspects of directionality, which are inseparable from the morphosyntactic aspects.
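The trigger re-alignment referenced above can be illustrated with a small, purely hypothetical sketch: the same simulated trials are epoched once relative to the coded sign onset and once relative to an assumed non-manual cue leading the sign onset by roughly 200 ms. All values are invented for illustration; this is not the analysis pipeline of Hosemann et al. (2018).

```python
# Illustrative sketch: epoching one simulated EEG channel relative to two
# different trigger positions (sign onset vs. an earlier non-manual cue).
import numpy as np

fs = 500                                          # sampling rate in Hz
rng = np.random.default_rng(1)
continuous_eeg = rng.normal(0.0, 2.0, 60 * fs)    # 60 s of simulated signal

# Hypothetical trigger times (in seconds) for seven trials.
sign_onsets = np.array([5.0, 12.4, 20.1, 27.9, 35.3, 44.8, 52.6])
nonmanual_cues = sign_onsets - 0.202              # assumed to precede the sign onset

def epoch(signal, trigger_times, t_min=-0.2, t_max=1.0):
    """Cut fixed-length epochs (t_min to t_max, in seconds) around each trigger."""
    n_samples = int(round((t_max - t_min) * fs))
    starts = [int(round((t + t_min) * fs)) for t in trigger_times]
    return np.stack([signal[s:s + n_samples] for s in starts])

# The same trials yield two time-locked averages, depending on the trigger used.
erp_sign_onset = epoch(continuous_eeg, sign_onsets).mean(axis=0)
erp_nonmanual_cue = epoch(continuous_eeg, nonmanual_cues).mean(axis=0)
print(erp_sign_onset.shape, erp_nonmanual_cue.shape)   # both: (600,)
```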

6.5 Conclusion

The present chapter provided evidence for a morphosyntactic understanding of sign language directionality, based on a broad experimental perspective. Sign language verb agreement is acquired in a systematic manner with similar developmental steps compared
to those observed in spoken language agreement acquisition; for example, reversal errors, overgeneralizations, and the use of pronouns as proper names. Priming and eye tracking studies, as well as online ERP experiments, provided additional evidence for the linguistic status of agreement. Nevertheless, some acquisition studies, priming studies, and EEG studies also pointed to the fact that agreement in sign languages is not equivalent to agreement in spoken languages. The enriched semantic aspects of the verbs’ movements, and of their associated loci at the beginning and the end, indicate that directionality in sign languages is a phenomenon at the interface of morphosyntax, semantics, and pragmatics.

Notes
1 English translation by the author of this chapter. In the original example, R-loci in signing space were represented with letters: ‘a’ for the signer (here subscript 1), ‘e’ for the right side of the signer (here subscript 3a), and ‘c’ for the left side of the signer (here subscript 3b). ‘CL3a’ is a classifier construction, locating the antecedent (i.e., ‘CAR’) at the R-locus to the right of the signer; ‘PRO1’ is the first-person pronoun.
2 POINT is an index sign locating a referent at a specific R-locus. For coherent labeling reasons, I changed the original coding of subscript a (left of the signer) and subscript b (right of the signer) into ‘3a’ and ‘3b’, respectively.

References

Bahan, Benjamin J. 1996. Non-manual realization of agreement in American Sign Language. Boston, MA: Boston University PhD dissertation.
Boudreault, Patrick & Rachel I. Mayberry. 2006. Grammatical processing in American Sign Language: Age of first-language acquisition effects in relation to syntactic structure. Language and Cognitive Processes 21(5). 608–635.
Brown, Roger. 1973. A first language: The early stages. Cambridge, MA: Harvard University Press.
Capek, Cheryl, Giordana Grossi, Aaron J. Newman, Susan L. McBurney, David Corina, Brigitte Roeder, & Helen J. Neville. 2009. Brain systems mediating semantic and syntactic processing in deaf native signers: Biological invariance and modality specificity. Proceedings of the National Academy of Sciences of the United States of America (PNAS) 106(21). 8784–8789.
Casey, Shannon. 2003. ‘Agreement’ in gestures and signed languages: The use of directionality to indicate referents involved in action. San Diego, CA: University of California PhD dissertation.
Chiat, Shulamuth. 1982. If I were you and you were me: The analysis of pronouns in a pronoun-reversing child. Journal of Child Language 9(2). 359–379.
Clark, Eve. 1978. From gesture to word: On the natural history of deixis in language acquisition. In Jerome S. Bruner & Alison Garton (eds.), Human growth and development: Wolfson College lectures 1976, 85–120. Oxford: Clarendon Press.
Costello, Brendan. 2016. Language and modality: Effects of the use of space in the agreement system of lengua de signos española (Spanish Sign Language). Amsterdam: University of Amsterdam PhD dissertation. Utrecht: LOT.
Coulson, Seana, Jonathan W. King, & Marta Kutas. 1998. Expect the unexpected: Event-related brain response to morphosyntactic violations. Language and Cognitive Processes 13(1). 21–58.
Cysouw, Michael. 2011. Very atypical agreement indeed. Theoretical Linguistics 37(3–4). 153–160.
Emmorey, Karen. 1991. Repetition priming with aspect and agreement morphology in American Sign Language. Journal of Psycholinguistic Research 20(5). 365–388.
Fowler, Carol, Shirley Napps, & Laurie Feldman. 1985. Relations among regularly and irregularly morphologically related words in the lexicon as revealed by repetition priming. Memory and Language 13. 241–255.
Guasti, Maria T. 1992. Verb syntax in Italian child grammar. Geneva Generative Papers 1–2. 115–122.
Hänel, Barbara. 2005a. Der Erwerb der Deutschen Gebärdensprache als Erstsprache. Die frühkindliche Sprachentwicklung von Subjekt- und Objektverbkongruenz in DGS. Tübingen: Narr.
Hänel, Barbara. 2005b. The acquisition of agreement in DGS: Early steps into a spatially expressed syntax. In Helen Leuninger & Daniela Happ (eds.), Gebärdensprachen: Struktur, Erwerb, Verwendung (Linguistische Berichte Special Issue 15), 201–232. Hamburg: Buske.
Hänel-Faulhaber, Barbara, Nils Skotara, Monique Kügow, Uta Salden, Davide Bottari, & Brigitte Röder. 2014. ERP correlates of German Sign Language processing in deaf native signers. BMC Neuroscience 15(62). 1–11.
Hansson, Gunnar Ó. 2004. Tone and voicing agreement in Yabem. In Benjamin Schmeiser, Vineeta Chand, Ann Kelleher & Angelo J. Rodriguez (eds.), West Coast Conference on Formal Linguistics 23 Proceedings, 318–331. Somerville, MA: Cascadilla Press.
Hoffmeister, Robert. 1978. The development of demonstrative pronouns, locatives, and personal pronouns in the acquisition of American Sign Language by deaf children of deaf parents. Minneapolis, MN: University of Minnesota PhD dissertation.
Hosemann, Jana. 2011. Eye gaze and verb agreement in German Sign Language: A first glance. Sign Language & Linguistics 14. 76–93.
Hosemann, Jana. 2015. The processing of German Sign Language sentences: Three event-related potential studies on phonological, morpho-syntactic, and semantic aspects. Göttingen: Georg-August University PhD dissertation.
Hosemann, Jana, Annika Herrmann, Holger Sennhenn-Reulen, Matthias Schlesewsky, & Markus Steinbach. 2018. Agreement or no agreement: ERP correlates of verb agreement violation in German Sign Language. Language, Cognition, and Neuroscience 33(9). 1107–1127.
Hosemann, Jana, Annika Herrmann, Markus Steinbach, Ina Bornkessel-Schlesewsky, & Matthias Schlesewsky. 2013. Lexical prediction via forward models: N400 evidence from German Sign Language. Neuropsychologia 51(11). 2224–2237.
Hou, Lynn Y.-S. 2013. Acquiring plurality in directional verbs. Sign Language & Linguistics 16(1). 31–73.
Krifka, Manfred & Hans-Martin Gärtner (eds.). 2011. Theoretical Linguistics: On the linguistic status of ‘agreement’ in sign languages (special issue). Berlin: De Gruyter Mouton.
Kutas, Marta & Steven A. Hillyard. 1983. Event-related brain potentials to grammatical errors and semantic anomalies. Memory & Cognition 11(5). 539–550.
Kutas, Marta, Cyma K. van Petten, & Robert Kluender. 2006. Psycholinguistics electrified II (1994–2005). In Matthew Traxler & Morton A. Gernsbacher (eds.), Handbook of psycholinguistics: 2nd edition. Amsterdam: Academic Press.
Liddell, Scott K. 2000. Indicating verbs and pronouns: Pointing away from agreement. In Karen Emmorey & Harlan Lane (eds.), The signs of language revisited: An anthology to honor Ursula Bellugi and Edward Klima, 303–320. Mahwah, NJ: Lawrence Erlbaum.
Liddell, Scott K. 2011. Agreement disagreements. Theoretical Linguistics 37(3–4). 161–172.
Lillo-Martin, Diane. 1986. Parameter setting: Evidence from use, acquisition, and breakdown in American Sign Language. San Diego, CA: University of California PhD dissertation.
Lillo-Martin, Diane & Richard P. Meier. 2011. On the linguistic status of ‘agreement’ in sign languages. Theoretical Linguistics 37(3–4). 95–142.
Loew, Ruth. 1984. Roles and reference in American Sign Language: A developmental perspective. Minneapolis, MN: University of Minnesota PhD dissertation.
Meier, Richard P. 1982. Icons, analogues, and morphemes: The acquisition of verb agreement in American Sign Language. San Diego, CA: University of California PhD dissertation.
Meier, Richard P. 1991. Language acquisition by Deaf children. American Scientist 79(1). 60–70.
Meier, Richard P. 2002. The acquisition of verb agreement: Pointing out arguments for the linguistic status of agreement in signed languages. In Gary Morgan & Bencie Woll (eds.), Directions in sign language acquisition, 115–141. Amsterdam: John Benjamins.
Meir, Irit. 1998. Thematic structure and verb agreement in Israeli Sign Language. Jerusalem: Hebrew University of Jerusalem PhD dissertation.
Molinaro, Nicola, Horacio A. Barber, & Manuel Carreiras. 2011. Grammatical agreement processing in reading: ERP findings and future directions. Cortex 47(8). 908–930.
Morgan, Gary, Isabelle Barrière, & Bencie Woll. 2006. The influence of typology and modality on the acquisition of verb agreement morphology in British Sign Language. First Language 26(1). 19–43.
Morgan, Gary, Rosalind Herman, & Bencie Woll. 2002. The development of complex verb constructions in BSL. Journal of Child Language 29. 655–675.
Neidle, Carol, Judy Kegl, Dawn MacLaughlin, Benjamin Bahan, & Robert Lee. 2000. The syntax of American Sign Language: Functional categories and hierarchical structure. Cambridge, MA: MIT Press.
Newport, Elissa L. & Richard P. Meier. 1985. Acquisition of American Sign Language. In Dan I. Slobin (ed.), The cross-linguistic study of language acquisition, Vol. 1: The data, 881–893. Hillsdale, NJ: Lawrence Erlbaum.
Osterhout, Lee & Linda Mobley. 1995. Event-related brain potentials elicited by failure to agree. Journal of Memory and Language 34. 739–773.
Palancar, Enrique L. & Jean L. Léonard (eds.). 2016. Tone and inflection: New facts and new perspectives. Berlin, Boston: Mouton de Gruyter.
Petitto, Laura Ann. 1987. On the autonomy of language and gesture: Evidence from the acquisition of personal pronouns in American Sign Language. Cognition 27. 1–52.
Pizzuto, Elena. 2002. The development of Italian Sign Language (LIS) in deaf preschoolers. In Gary Morgan & Bencie Woll (eds.), Directions in sign language acquisition, 77–114. Amsterdam: John Benjamins.
Quadros, Ronice de. 1997. Aspectos da sintaxe e da aquisição da Língua Brasileira de Sinais [Syntactic aspects in the acquisition of Brazilian Sign Language]. Letras-de-Hoje 32(4). 125–146.
Quadros, Ronice Müller de & Diane Lillo-Martin. 2007. Gesture and the acquisition of verb agreement in sign languages. In Heather Caunt-Nulton, Samantha Kulatilake, & I-hao Woo (eds.), Proceedings of the 31st Annual Boston University Conference on Language Development, 520–531. Somerville, MA: Cascadilla Press.
Quer, Josep. 2011. When agreeing to disagree is not enough: Further arguments for the linguistic status of sign language agreement. Theoretical Linguistics 4. 189–196.
Rathmann, Christian & Gaurav Mathur. 2011. A featural approach to verb agreement in signed languages. Theoretical Linguistics 37(3–4). 197–208.
Schembri, Adam, Kearsy Cormier, & Jordan Fenlon. 2018. Indicating verbs as typologically unique constructions: Reconsidering verb ‘agreement’ in sign languages. Glossa: A Journal of General Linguistics 3(1), 89. 1–40.
Tang, Gladys, Scholastica Lam, Felix Sze, Prudence Lau, & Jafi Lee. 2008. Acquiring verb agreement in HKSL: Optional or obligatory? In Ronice M. de Quadros (ed.), Sign languages: Spinning and unraveling the past, present and future. Forty-five papers and three posters from TISLR 9. Petrópolis, RJ, Brazil: Editora Arara Azul.
Thompson, Robin L. 2006. Eye gaze in American Sign Language: Linguistic functions for verbs and pronouns. San Diego, CA: University of California PhD dissertation.
Thompson, Robin, Karen Emmorey, & Robert Kluender. 2006. The relationship between eye gaze and verb agreement in American Sign Language: An eye-tracking study. Natural Language & Linguistic Theory 24(2). 571–604.
Thompson, Robin L., Karen Emmorey, & Robert Kluender. 2009. Learning to look: The acquisition of eye gaze agreement during the production of ASL verbs. Bilingualism: Language and Cognition 12(4). 393–409.
Thompson, Robin L., Karen Emmorey, Robert Kluender, & Clifton Langdon. 2013. The eyes don’t point: Understanding language universals through person marking in American Sign Language. Lingua 137. 219–229.
Van den Bogaerde, Beppie & Anne Baker. 1996. Verbs in the language production of one deaf and one hearing child of deaf parents. In The 5th International Conference on Theoretical Issues on Sign Language Research. Montréal, Canada.
Wilbur, Ronnie B. 2013. The point of agreement: Changing how we think about sign language, gesture, and agreement. Sign Language & Linguistics 16(2). 221–258.


7 CLASSIFIERS: Theoretical perspectives
Gladys Tang, Jia Li, & Jia He

7.1  Introduction In spoken languages, the term ‘classifier’ refers to morphemes, free or bound, for noun categorization based on “some salient perceived or imputed characteristics of the entity to which an associated noun refers” (Allan 1977: 285). These characteristics are generally semantically motivated, such as humanness, animacy, physical properties, functions, and directionality. According to Aikhenvald (2000), classifier types such as noun classes, numeral classifiers, verbal classifiers, and deictic classifiers come with different morphosyntactic properties of the constructions they occur in and different degrees of grammaticalization. Whether or not elements that would be equivalent to classifiers in spoken languages exist in sign languages has been subject to much debate. When applied to sign languages, the term ‘classifier’ refers to the handshape component of signs that are commonly known as classifier predicates. Similar to spoken language classifiers, they are said to reflect some salient semantic characteristics of the corresponding noun referents (Frishberg 1975). In comparison with location and movement, which are said to be gradient, classifier handshapes are more conventionalized (although the degree of conventionalization may differ from sign language to sign language).1 For instance, to refer to the noun referent ‘vehicle’, different handshapes are observed across sign languages: a -​handshape (i.e., thumb, index, and middle finger extended) in American Sign Language (ASL), a -​handshape with palm down in Hong Kong Sign Language (HKSL), and a flat -​handshape in Beijing Sign Language and Tianjin Sign Language. Some scholars argue that the handshape component of classifier predicates may be likened to verbal classifiers in spoken languages (e.g., Supalla 1982; Aikhenvald 2003; Grinevald 2003). Zwitserlood (2012) draws parallels between sign language classifiers and verbal classifiers of spoken languages by comparing their morphological status and functions. First, both are affixal to the verb stem, a point already made by many sign language researchers (e.g., Supalla 1982; Sandler & Lillo-​Martin 2006; Tang & Gu 2007). Second, they can be associated with arguments in various grammatical relations –​ subject, object, or indirect object (Benedicto & Brentari 2004). Their anaphoric relationship with nouns potentially serves a referent tracking function. Third, there is variability 139


Third, there is variability in the choice of verbal classifiers even with the same entity, which is also a general characteristic of classifiers in spoken languages (Engberg-Pedersen 2003). Fourth, and this might be unique to sign languages, they appear only with a subset of verbs, usually verbs of motion, location, object manipulation, and physical property descriptions (Supalla 1982; Engberg-Pedersen 2003). While some researchers propose to account for the morphosyntactic properties of classifiers and the argument structure they are associated with by means of an agreement analysis, other researchers caution against adopting such an approach, arguing that these constructions contain a strong gestural and iconic element and hence do not warrant a fully grammatical treatment. Alternative terminologies such as 'visual imagery' (DeMatteo 1977), 'schematic visual representations' for depiction (Cogill-Koez 2000), and 'depicting constructions/verbs' (Liddell 2003; Schembri 2003; Cormier et al. 2012) have been suggested. Both Liddell and Cogill-Koez, for instance, argue that the modality of communication allows duality of representations, which means that some structured systems of visual and linguistic representations may co-exist in classifier predicates. For Liddell, this duality involves both a verb with lexical properties and depiction, the latter consisting of the mental space mappings in which the hand is located and oriented in variable and gradient ways.2 Interestingly, this idea of dual representation involving linguistic and iconic elements of language has recently been taken up in Davidson's (2015) proposal of using 'role shift' to unify the accounts of verbs of quotation and classifier predicates. In the last section of the chapter, we will summarize this recent attempt at analyzing classifier predicates from a formal semantic perspective.

7.2  Typology of classifiers in sign languages

One typical function of classifiers in natural languages is to classify objects according to a set of properties which are usually semantic in nature. Over the years, different categorizations of sign language classifiers have been put forward (Zwitserlood 1996, 2003; Schembri 2003), making cross-linguistic comparison difficult at times. In this section, we will briefly review attempts to categorize the handshape components of classifier predicates. The first attempt to categorize classifier handshapes comes from Supalla (1982, 1986), who distinguishes for ASL: (i) semantic classifiers, which refer to abstract semantic categories of nouns (e.g., a handshape for humans and another for 'tree'); (ii) size and shape specifiers (SASSes), which denote the visual-geometric shapes of objects (e.g., a handshape for 'objects with a round and flat surface'); (iii) instrument classifiers for different ways of manipulating objects by the hands as well as shapes of the tools manipulated (e.g., a handshape for 'holding a cylindrical object'); (iv) bodypart classifiers (e.g., a handshape for 'foot'); and (v) body classifiers (e.g., the signer's entire body). Following Supalla's seminal study, alternative categorizations have been suggested by other researchers. Shepard-Kegl (1985) proposed a two-way distinction between forms that represent objects and forms that indicate the handling of objects. Yet other scholars preferred more subclasses. Schick (1987), in a study on the acquisition of classifiers (see Zwitserlood, Chapter 8), postulated three main categories of classifier morphemes (i.e., class, handle, and SASS), and Liddell & Johnson (1987) proposed as many as seven (i.e., whole entity, surface, instrumental, depth and width, extent, perimeter-shape, and on-surface morphemes). The fact that the terms semantic (Supalla), class (Schick), and whole entity (Liddell & Johnson) classifier all refer to the same classifier type is testimony to the terminological confusion.


While these earlier typologies paid much attention to the visual-​ geometric characteristics or functions of the noun referents in the categorization, some researchers already suggested that they may interact with argument structure or (in)transitivity (Supalla 1986; Schick 1990). This insight became apparent in Engberg-​Pedersen’s (1993) typology (i.e., whole entity, limb, handle, and extent), in which she argues that these classificatory handshapes interact with the meaning of the predicate they combine with. For instance, whole entity handshapes occur in stative and process verbs while handle handshapes occur in process verbs, and both handshape types may refer to the theme or actor argument of the predicates. Engberg-​Pedersen’s (1993) typology was adopted by Benedicto & Brentari (2004) in their analysis of argument structure and transitivity alternations in ASL (see Section 7.5.2 below). Another example comes from Meir (2001), who distinguishes only two types of verbal classifiers in Israeli Sign Language –​theme classifiers and instrumental classifiers. She further argues that their different morphosyntactic properties are the result of specific noun incorporation processes (see Section 7.4 below). Despite a lack of consensus about the nature of categorization, Zwitserlood (2012: 161) observes that most sign languages distinguish two broad types of classifier handshapes at least. On the one hand, there are whole entity classifiers, which directly represent a noun referent based on its size, shape, and dimension. Whole entity classifiers thus cover Supalla’s semantic classifiers, static SASSes, some bodypart classifiers, and tool classifiers. On the other hand, there are handling classifiers that indirectly represent a noun referent through the way in which it is manipulated or held either by a human agent or a non-​ human entity. Handling classifiers thus cover Supalla’s instrument classifiers and some bodypart classifiers. Variation in the choice of classifier is often possible in order to reflect particular aspects of the noun referent or of how it is handled. Zwitserlood (2012: 162) also proposes to remove from the group of verbal classifiers (i) body classifiers, as they can be conveniently analyzed as means for referential shift, and (ii) tracing SASSes for their unique phonological and grammatical properties that set them apart from other types of classifiers. Currently, the general consensus is that tracing SASSes are not verbal, as they consistently function as adjectival modifiers within noun phrases. To conclude, our brief summary of the typologies of sign language classifiers confirms the various classificatory functions of the handshape component of classifier predicates in sign languages. These functions to some extent have influenced how the handshape typologies were established. In the following sections, we will summarize the theoretical approaches towards analyzing sign language classifiers and the types of predicates they occur in. Most analyses draw data from motion and locative predicates, but some also include data from transitive and causative predicates.

7.3  Verb root/stem analysis

In analyzing verbs of motion, location, and existence, Supalla (1982: 12–13) argues that 'classifier verbs' in ASL are compositional and morphologically complex. Their skeletal morphological structure consists of a movement component as the basic verb root and a classifier affix. According to him, the basic verb roots may be action, contact, or stative, depending on the phonological properties of the movement component of the sign. Also, the two skeletal components may combine with other affixes in constrained, rule-governed ways.


He provides the example of an airplane classifier ( -handshape in ASL), which has an unmarked 'upright' orientation affix (i.e., palm facing downward) under normal circumstances; yet, other marked orientations like 'side down' or 'front down' are permitted. Alternatively, encoding manner of motion, as in MOVE-STRAIGHT-WITH-SMALL-JUMPS (i.e., 'hop'), is made possible by adding a movement affix to the basic action verb root (Supalla 1982: 21). Another example is the placement affix, which encodes the location of a noun referent in a stative root, or the location where the path of a noun referent trespasses in an action root.3 Supalla's influential 'movement as verb root' analysis has been adopted by other researchers; however, it has also attracted criticism. Engberg-Pedersen (1993: 245), for instance, argues against treating handshape as a classifier and movement as a verb root. She argues that, at least in Danish Sign Language (DSL), movement alone fails to give the intended meaning of the predicate root consistently. An arc forward movement with an abrupt hold could entail different meanings, like (i) '(I) put (a book) on (the table)', or (ii) '(A bird) perched up on (a branch)'. In other words, there is no one-to-one mapping between a specific movement and the meaning of a predicate root. Additionally, the choice of handshape may be motivated by factors other than noun categorization. For instance, in DSL, both a -handshape and a -handshape (fingertips oriented downwards) may refer to upright humans, but the -handshape focuses on the two legs. Instead of movement, she thus proposes to analyze handshape as the verb stem, arguing that verb meaning in DSL is determined by the interaction of the handshape morpheme and the morphemes expressed by movement, and not by movement alone. She concludes that 'polymorphemic verbs' is a more appropriate term to capture the morphological status of these signs, which are not 'fused' classificatory verb stems like those observed in, for instance, Navajo, because the latter are lexicalized mono-morphemic forms. This interaction view is also shared by Slobin et al. (2003), who call these signs polycomponential verbs, consisting of two meaningful components, handshape and movement, similar to bipartite verbs in spoken languages. With respect to the morphological structure of classifier predicates, views thus vary between movement as verb root, handshape as verb stem, or handshape and movement as meaningful components. While all approaches assume that movement and handshape may be analyzed as discrete morphological units, which component functions as the root of the verbal predicate is subject to debate. If one accepts the premise that the meaning of a predicate root may be boiled down to either process or state, it stands to reason that movement in classifier predicates is a stronger candidate than handshape for constituting the verbal root, since it is well accepted that movement in classifier predicates is predicational, in the sense that its shape and intensity entail event structure and even event plurality (Wilbur 2003). In the next sections, we will discuss accounts that indeed focus on the function of handshape as classifiers, which form a relationship with the nominal arguments of the predicate rather than constituting the verbal root.

7.4  Noun incorporation analysis

According to Meir (2001), Israeli Sign Language (ISL) has two types of classifiers – theme and instrumental classifiers – which combine with the verb through noun incorporation (NI) to form a V+N complex. Meir's analysis takes as a starting point Rosen's (1989) study on lexical word formation, where it is argued that a noun root, usually a direct object NP, combines with a verb root to form a complex verb in a fashion similar to compounding.4 Rosen distinguishes two kinds of NI across spoken languages – Compound NI and Classifier NI.


Compound NI affects the verb's argument structure since, following incorporation, only one argument remains because the incorporated noun argument is saturated within the verb complex. For the Samoan example in (1), where the direct object tama ('child') combines with the verb tausi ('care'), the clause-final subject pronoun 'he' requires absolutive case because the verb becomes intransitive; otherwise, it receives ergative case in the transitive context.

(1) Po ˀo a:fea e tausi-tama ai ˀoia?
    Q PRED when TNS care-child PRN ABS-he
    'When does he baby-sit?'
    (Samoan, Mithun 1984: 850, cited in Rosen 1989: 310)

Classifier NI, on the other hand, does not change the number of arguments because the verb's argument is not saturated within the verb complex; the incorporated noun functions like a classifier, as it is semantically related to the direct object NP. Classifier NI gives rise to certain syntactic possibilities concerning the direct object NP, which are not possible with Compound NI. First, doubling of the overt direct object NP and the incorporated noun is possible. In (2), the direct object NP tsi:r ('dog') co-occurs with the incorporated noun taskw that specifies the general category of 'animal'.

(2) ne-hra-taskw-ahkw-haˀ haˀ tsi:r.
    DU-M-animal-pick.up-SERIAL EMPH dog
    'He picks up domestic animals.' ('He is a dog catcher.')
    (Tuscarora, Williams 1976: 60, cited in Rosen 1989: 303)

Alternatively, the direct object NP can be completely null or partially null; the latter occurs when some remnant elements within the NP, such as determiners, modifiers, and possessives, are stranded after incorporation of the nominal head. In the Mohawk example in (3), the modifier 'dotted' is stranded when the head noun 'dress' is incorporated into the verbal complex.

(3) Kanekwarúnyu waˀ-k-akyaˀtawiˀtsher-ú:ni.
    3N.dotted.DIST PAST-1SG.3N-dress-make
    'I made a polka-dotted dress.'
    (Mohawk, Mithun 1984: 870, cited in Rosen 1989: 299)

Following Rosen (1989), Meir (2001) proposes to analyze theme classifiers and instrumental classifiers in ISL as instances of Classifier NI and Compound NI, respectively. According to her, these two types of classifiers show phonological and grammatical distinctions. Phonologically, theme classifiers have specifications for handshape and possibly orientation only, whereas instrumental classifiers have handshape, location, and movement features fully specified. Therefore, theme classifiers are analyzed as affixes, while instrumental classifiers are noun roots with full phonological specification. As a consequence, instrumental classifiers undergo Compound NI, a lexical process that combines a verb root and a noun root to form a root compound, whereas theme classifiers combine with the verb root through affixation, that is, they are the result of Classifier NI. Meir provides the following ISL data to support her predictions. Given the Classifier NI analysis, incorporation of theme classifiers allows doubling and stranding. (4a) is an example of doubling. The theme classifier, articulated by a flatC-handshape representing a wide flat object, is an affix attached to the verbal root GIVE.


It co-occurs with the antecedent BOOK, which is the direct object NP of the verb. In (4b), the modifier NEW is stranded after the theme classifier for 'cup', represented by a C-handshape, has been incorporated into the verbal predicate GIVE.

(4) Theme classifiers as result of Classifier NI
    a. BOOK INDEXb HEa aGIVE-CL:flatC1          ← ✓Doubling
       book that he wide-flat-object-he-give-me
       'He gave me this book.'
    b. NEW INDEXa aGIVE-CL:C1                   ← ✓Stranding
       new that give-cylindrical-object-me
       'Cylindrical-object-give-me the new.' (lit. 'Give me the new cup (over there).')
       (ISL, Meir 2001: 304–305)

In contrast, instrumental classifiers are analyzed as the result of Compound NI, and consequently, neither doubling nor stranding should be acceptable because Compound NI leads to the saturation of the instrument argument within the verb complex. Given that the verb does not introduce another instrument nominal in the syntax, the occurrence of an independent instrument NP, alongside the classifier, is very marginal, if not ungrammatical.5 In (5a), introducing the independent nominal SPOON in the syntax leads to ungrammaticality, since the instrument argument SPOON, a noun root for incorporation, is already saturated in the V+N complex. Similarly, stranding of a modifier, such as NEW in (5b), is ruled out with instrumental classifiers.

(5) Instrumental classifiers as result of Compound NI
    a. *I SPOON BABY INDEXa 1SPOON-FEEDa        ← *Doubling
       I spoon baby that I-spoon-feed-him
       'I spoon-fed the baby with a spoon.' (lit. 'I fed the baby with a spoon.')
    b. *STAR(distributive) NEW I TELESCOPE-LOOK ← *Stranding
       stars new I telescope-watch
       'I telescope-watched the stars with the new (one).' (lit. 'I watched the stars with the new telescope.')
       (ISL, Meir 2001: 304, 306)

This noun incorporation analysis has been criticized, as it fails to account for data from other sign languages: German Sign Language (DGS; Glück & Pfau 1998), Sign Language of the Netherlands (NGT; Zwitserlood 2003), and ASL (Benedicto & Brentari 2004). For DGS, Glück & Pfau (1998) observe that stranding is not allowed in classifier predicates that involve a theme classifier. In (6a), for instance, it is ungrammatical to strand the numeral THREE ('three (flowers)'). Moreover, doubling, which Rosen (1989) argues to be optionally possible with Classifier NI in spoken languages, is in fact obligatory in DGS, as is shown in (6b); otherwise, there is no way to interpret the classifier in the predicate. Given these empirical problems, Glück & Pfau argue for an agreement account of classifiers in DGS (see Section 7.5.1.1).

(6) a. *MAN^INDEX3a WOMAN^INDEX3b THREE 3aGIVE3b-CLflower
       'The man gives three flowers to the woman.'
    b. *MAN^INDEX3a WOMAN^INDEX3b 3aGIVE3b-CLflower
       'The man gives a flower to the woman.'
       (DGS, Glück & Pfau 1998: 62, 64, slightly adapted)


Additional counterevidence comes from NGT. Zwitserlood (2003: 299) makes a distinction between ‘motivated signs’, a term coined by van der Kooij (2002), and classifier predicates (i.e., verbs of motion and location). The former refers to those lexical signs whose components are meaningful or iconically motivated, like the sign (CUT-​W ITH-​) SC ISSORS the handshape and movement of which mimic the shape of the object and the real-​world activity. Working within the framework of Distributed Morphology (DM), she argues that these two categories of signs are merged at different levels during the derivation. The components of motivated signs are roots and thus lexical in nature; these roots are merged until their categorial status is determined by a functional element which Zwitserlood calls ‘little x’; this may be ‘little v’ (i.e., vP) yielding a verb or ‘little n’ (i.e., nP) yielding a noun. In other words, motivated signs are root compounds (see Section 7.5.1.3 for a fuller description). As for motion and locative classifier predicates, the handshape and location morphemes are merged above ‘little v’, as they serve as agreement affixes and are thus functional in nature (see Section 7.5.1.2 for details). Zwitserlood (2003: 312–​ 315) then compares the DM approach with the NI approach, pointing out that NI fails to account for these two types of signs in NGT for the following reasons. First, spoken languages generally incorporate the patient (i.e., the direct object) into the verb complex and impose restrictions on themes or instruments in NI. However, sign languages work exactly the other way around. For sentences involving an instrument, it is usually the instrument but not the patient that is reflected by the classifier handshape that combines with the verb (e.g., ‘spoon-​feed the baby with porridge’). Second, whereas incorporation is optional in spoken languages, at least the incorporation of instrument is obligatory in motivated verb signs in NGT, as these verbs do not exist independently in the absence of an instrument handshape. In Figure 7.1, the verb ‘to water’ comes with a handshape referring to the instrument ‘watering can’. Yet, as pointed out by Zwitserlood, there is no ‘unincorporated’ variant of this verb, and thus no alternative structure involving a general verb ‘to water’ in combination with an instrument noun phrase WATERING-​C AN .

JOHN FLOWER waterV(watering can)
'John waters the flowers with a watering can.'

Figure 7.1  Motivated sign in NGT involving an instrument as argument (NGT, Zwitserlood 2003: 312, slightly adapted; © Inge Zwitserlood, reprinted with permission)

Although Meir's (2001) noun incorporation analysis fails to account for data from other sign languages, her idea of treating the handshape component as part of a root compound has been adopted in Zwitserlood's root compound analysis for 'motivated signs'. We will get back to this issue in Section 7.5.1.3.


7.5  Analyses in terms of agreement

Sign languages have been argued to demonstrate different types of agreement. Of interest to the present discussion is that, besides spatial agreement (see Quer, Chapter 5), some scholars argue that classifier handshapes may also instantiate agreement. Supalla (1982) was the first to suggest that classifiers may be analyzed as agreement markers for noun arguments of a verb root, although he did not put forward any account to capture this agreement relation. Interestingly, Edmondson (1990) proposes that one should take a much broader view on 'classification' and go beyond cognitive or perceptual categorization, treating classifiers instead as a kind of 'proform' manifesting agreement with the noun referents they refer to (Edmondson 1990: 196), in the sense that a classifier morpheme sets up an anaphoric relation with the noun referent, thus serving a pronominal role. In what follows, we will summarize various approaches adopted by researchers in their formalization of the categorical status of verbal classifiers in sign languages, as well as the nature of the agreement relation the classifier handshapes partake in. We will distinguish two types of approaches: (i) classifiers are embedded under the agreement (Agr) nodes for subject and object (Section 7.5.1), and (ii) classifiers are heads of functional projections and interact with argument structure (Sections 7.5.2, 7.5.3, and 7.6). Glück & Pfau (1998, 1999) and Zwitserlood (2003, 2008) attempt to elucidate the agreement relation within the framework of Distributed Morphology, thus belonging to the first group. Benedicto & Brentari (2004), Lau (2002, 2012), and Grose et al. (2007) adopt the second approach, but hold different views on the nature of the functional projections headed by the classifiers and the syntactic representations of classifier predicates, in particular, whether the representation generates an interpretation based on specific argument structures that interact with classifiers, or whether it is built upon an underlying event structure composed of sub-event components from which the argument structure follows.

7.5.1  Analyses within the framework of Distributed Morphology

7.5.1.1  Classifiers as agreement markers

Following Allan's (1977) typology of classifiers, Glück & Pfau (1998: 63) take DGS to be a language of the predicate classifier type. Rather than using the inherent physical properties of objects for classification, as in Allan (1977), they suggest using "syntactic relations as a basis for grouping of classifiers, i.e., classifying verbs". Specifically, they propose, with reference to Anderson (1992: 82), that classification in DGS is an instance of inflection, namely agreement. To support their argument, Glück & Pfau (1999) divide verbs in DGS into two broad types: (i) plain verbs, which do not show person or number agreement, and (ii) agreement verbs, with three sub-categories: person agreement verbs, classifying verbs, and spatial verbs. Both person agreement verbs and classifying verbs behave similarly, in the sense that they may encode subject and/or object agreement. The former realize subject and object agreement via modification of the beginning and end point of path movement, while the latter "classify one argument" (Glück & Pfau 1999: 69), either subject or object, and realize subject or object agreement via handshape change.


They further assume that classifiers are "assigned in a certain phrasal projection, and the inherent properties of arguments (subject/object) are the relevant features for triggering classification" (Glück & Pfau 1998: 65). The third type of verbs – spatial verbs – also realizes agreement by means of modifying the beginning and/or end point of the path movement, but agreement is with a locative argument. In the following discussion, we will only focus on agreement verbs and classifier verbs in DGS, which they claim behave similarly syntactically. Empirical evidence for this claim comes from left dislocation, as both types of verbs license a pro after dislocation of a subject or object, while use of a resumptive pronoun is optional. Building on earlier work on ASL by Lillo-Martin (1991), Glück & Pfau (1998) first observe that plain and agreement verbs in DGS show different behaviors when dislocating an embedded subject or object to the left. In (7a), which includes the plain verb BUY in the embedded clause, the object BOOK is left-dislocated to a topic position from inside the embedded clause, and the gap needs to be filled by a resumptive pronoun INDEX3 ('it'); otherwise, the sentence is ungrammatical. In contrast, in (7b) the agreement verb SHOW in the embedded clause licenses a null argument pro for the left-dislocated element (i.e., the object WOMAN), and use of the resumptive pronoun INDEX3a ('her') is optional.

(7) Person agreement licenses pro in a left dislocation structure
          top
    a. BOOK^INDEX3, CHILD THINK, MAN *pro/INDEX3 BUY
       'This booki, the child thinks, the man buys iti.'
          top
    b. WOMAN^INDEX3b, CHILD THINK, MAN^INDEX3a pro/INDEX3b BOOK 3aSHOW3b
       'This womani, the child thinks, the man shows (heri) the book.'
       (DGS, Glück & Pfau 1998: 69, slightly adapted)

Classifier verbs behave like person agreement verbs in such contexts. As shown in (8a) and (8b), a classifier verb in the embedded clause licenses a pro and renders the resumptive pronoun INDEX3 ('it') optional – similar to what we observe in (7b). The classifier on the verb can be an entity classifier (related to the left-dislocated subject DOG), as in (8a), or a handling classifier (related to the object GLASS), as in (8b). Since there is no person agreement marker on the embedded predicate, Glück & Pfau argue that it can only be the classifier that licenses the null argument, and thus conclude that classifiers in DGS are agreement markers to the extent that classifier morphology licenses pro.

(8) Classifiers as agreement markers for subject (a) or object (b)
          top
    a. DOGa^INDEX3a, CHILD THINK, pro/INDEX3a STREETcenter centerGO-CLa
       'The dogi, the child thinks, (iti) is crossing the street.'
          top
    b. GLASSa^INDEX3a, CHILD THINK, MAN pro/INDEX3a TABLEcenter centerTAKE-CLa
       'The glassi, the child thinks, the man takes (iti) off the table.'
       (DGS, Glück & Pfau 1998: 70, 71, slightly adapted)

To account for classifiers participating in subject/object agreement, Glück & Pfau (1999) adopt Distributed Morphology (DM, Halle & Marantz 1993). First, they propose that the syntactic structure of DGS does not contain any agreement projections.


The verb is assumed to raise and adjoin to functional heads, first to Asp, then to Tns, which contain morphosyntactic and semantic features only. At Morphological Structure, the interface between syntax and Phonological Form (PF), Agr nodes are adjoined to various heads – AgrS to Tns, AgrDO to V, and AgrIO to Asp (see Figure 7.2) – and the relevant agreement features are copied onto the agreement heads. At PF, phonological features are assigned to these morphosyntactic feature bundles via Vocabulary Insertion.

Figure 7.2  Adjunction of agreement nodes to functional heads after verb movement (Glück & Pfau 1999: 73)

Glück & Pfau further claim that person agreement involves 'concrete' morphemes (i.e., phonological spell-out is more or less invariable and involves points in space), while classifier agreement morphemes are 'abstract', and phonological readjustment rules are necessary for the specific lexical entries to be spelled out at PF (i.e., phonological spell-out is variable). Therefore, at the point of Vocabulary Insertion, person (and number) agreement markers are inserted into the respective agreement nodes, while readjustment rules are responsible for changing the handshape of the stem for agreement purposes, in a fashion similar to a stem-internal change like sing–sang–sung in English. While Glück & Pfau's agreement analysis based on DM opened up a new line of inquiry into the categorical status of sign language classifiers, there are still issues that need to be resolved. The authors developed their analysis mainly on the basis of the direct object of ditransitive classifier predicates, but did not provide a clear picture of what happens in the case of intransitive or simple transitive classifier predicates, where the classic agreement mechanism might show up in addition to classifier agreement (Geraci & Quer 2014). It remains unclear what motivates the grammar to invoke person agreement verbs like xVISITy only, classifier verbs like BREAK-CLstick ('break a stick with the hands') with classifier agreement features, or verbs with both person and classifier agreement such as xGIVEy-CLglass ('x gives a glass to y'). Future research also needs to explain how agreement takes place with features pertaining to classifiers at MS before readjustment rules apply via Vocabulary Insertion at PF. As said, Glück and Pfau's analysis rests upon the assumption that both person agreement verbs and classifier verbs behave similarly in terms of the licensing of pro in left dislocation constructions. However, licensing of pro is neither a necessary nor a sufficient condition for agreement marking. Zwitserlood (2003) provides two arguments to show that there is no one-to-one relation between agreement and null arguments. First, from a cross-linguistic perspective, it is clear that null arguments are not necessarily licensed by (rich) agreement. On the one hand, languages like German and French have rather rich agreement systems, but are described as non-pro-drop languages.


On the other hand, languages such as Chinese, Japanese, and Korean do not have verb agreement at all, but still allow pro drop even more freely than those with rich agreement systems (Huang 1984, 1995; cited in Zwitserlood 2003). The second argument comes from Lillo-Martin (1986, 1991), whose analysis is adopted by Glück & Pfau. Lillo-Martin shows that in ASL, null arguments are allowed in the absence of agreement morphology, as the examples in (9) show (both SUNBATHE and EAT-UP are plain verbs). She argues that in these cases, the null argument is licensed by topic. Therefore, Glück & Pfau's (1998) agreement analysis may be too strong in light of the alternative strategies for licensing null arguments attested in natural languages. In sum, agreement morphology is a sufficient but not a necessary condition of licensing a pro.

(9) a.                                        hn
       aJOHN aFLYb bCALIFORNIA LAST-WEEK. ENJOY SUNBATHE[dur].
       'John flew to California last week. (He's) enjoying a lot of sunbathing.'
    b. A: Did you eat my candy?
       B: YES, EAT-UP
       'Yes, (I) ate (it) up.'
       (ASL, Lillo-Martin 1986: 421)

7.5.1.2  Gender agreement

Following up on Glück & Pfau's (1998, 1999) proposal that sign language classifiers function as agreement markers, Zwitserlood (2003, 2008) further suggests that, at least in NGT, classifier morphemes, which she takes to be similar to verbal classifiers in spoken languages, function as agreement markers in a way comparable to noun class systems, due to their morphosyntactic and semantic characteristics. Just like noun class agreement markers, they (i) are obligatorily affixed to verbs, (ii) indicate an argument of the verb, (iii) assign a noun class to nouns based on their semantic characteristics, and (iv) serve the function of reference tracking. For spoken languages, Corbett (1991) observes that noun class and gender, while both being based on semantics, are marginally different with respect to how the classes are defined: gender is sex-based while noun class has different semantic bases, like animacy, shape, and humanness. Based on these observations, and focusing on verbs of existence, location, and movement (VELMs), Zwitserlood argues that the inherent semantic characteristics, as displayed by NGT classifiers, are 'gender agreement features' (i.e., morphosyntactic φ-features) (Zwitserlood 2003: 187, footnote 4; Zwitserlood & van Gijn 2006). These features can be grouped according to whether they denote (i) animacy ([+animate]) and leggedness ([+legged]); (ii) size and shape ([+straight], [+small], [+flat], [+volume]); and (iii) the amount of control exercised by a manipulator ([+control]) (Zwitserlood 2003: 192).6 This feature-based approach to NGT classifier agreement entails morphemic units below the level of the sign, and combinations of these units are rule-governed. Taken together, at least handshapes and locations in classifier verbs and locations in agreement verbs display the function of agreement marking. Working also within the DM framework, Zwitserlood (2003, 2008) proposes a mapping procedure between the order of merger and the realization of agreement. Here, we sketch the derivation she suggests for the NGT classifier predicate LEGGED.ENTITY-MOVE.UP (Figure 7.3).


Figure 7.3  The classifier predicate L E GGE D. E NT ITY-​M OV E.U P in NGT (Zwitserlood 2008: 254; © Signum Verlag, reprinted with permission)

Zwitserlood proposes that the abstract verb root [+DIR] merges with an obligatory internal argument (i.e., the Theme) and two more internal arguments (e.g., Source and Goal) to form a series of root phrases (√P). When the root phrases merge with a category node vP, a cyclic domain is formed, and Spell-out takes place. Further derivations occurring at Morphological Structure allow the Source and Goal arguments to copy their respective [+loc] features onto two AgrOOs (agreement with oblique object) to instantiate locus agreement (with loci x and y), and the Theme argument to copy the [+legged] and [+loc] features to AgrS, as shown in Figure 7.4. Apparently, Zwitserlood's account leaves aside formal agreement based on person and focuses primarily on agreement between the noun arguments and the classifiers based on gender features, be they for subject or object agreement. To account for transitive VELMs, she further assumes a voiceP to project an Agent argument above little vP, although she rejects the idea that it is the handling classifier that heads the voiceP (Zwitserlood 2003: 210).

Figure 7.4  Analysis of classifier predicate L E GGE D.EN TI TY-​M OV E.U P (see Figure 7.3) in NGT, based on Distributed Morphology (Zwitserlood, 2008: 268, slightly adapted)


7.5.1.3  Root compounds

Another insight from Zwitserlood (2003, 2008) is that DM offers a unified account for classifier predicates and so-called 'frozen signs' (i.e., semantically motivated signs).7 Frozen signs in NGT are lexical nouns and verbs which incorporate in their formation meaningful sub-lexical units like classifier handshapes, movement, and place of articulation. For example, the same -handshape systematically appears in signs like HARE, TO-WEIGH/WEIGHT/KILO, and MIRROR. This overlap suggests that in these signs, the -handshape is a classifier handshape representing 'flat and wide entities', such as 'ears of a hare', 'scale of a pair of scales', and 'a mirror' (Zwitserlood 2003: 285). Another example is the frozen sign ESCALATOR (see Figure 7.5), whose underlying structure has a lot in common with the motion predicate LEGGED.ENTITY-MOVE.UP depicted in Figure 7.3 above.


Figure 7.5  The ‘frozen sign’ E SCA L AT O R in NGT (Zwitserlood 2008: 254; © Signum Verlag, reprinted with permission)

In Figure 7.6, we sketch the derivation that Zwitserlood suggests for the frozen sign ESCALATOR. Structurally, the root [+DIR] merges with tiers of root phrases, first one for the internal Theme argument bearing [+legged] and [+loc] features, then with others carrying the locus features ([+high] and [+low]). As Figure 7.6 shows, the series of root phrases merges with a category node little n to form an nP, and the domain is shipped off to PF and LF (the Conceptual Interface). This series of root phrases thus reflects the internal structure of a root compound.

Figure 7.6  Derivation of ESCALATOR in NGT (Zwitserlood 2008: 269, slightly adapted)


Zwitserlood points out that classifier predicates differ from root compounds only in having just one root (i.e., the movement), whereas each of the meaningful components of a root compound can be a root. She attributes this to “the different points in the derivation at which the components are merged” (Zwitserlood 2003:  322). In classifier predicates, the agreement nodes that are inserted (and spelled out by handshape and place of articulation features) are merged above the category node (i.e., little v), and thus, they function as inflectional affixes. In contrast, in root compounds, handshape and place of articulation features are inserted into the root phrases below the category node (e.g., little v, little n, etc.); hence they all function as lexical elements. Thus, the formation of classifier predicates is a grammatical (inflectional) process while that of root compounds is a lexical (compounding) process. Taken together, we have seen that some researchers working on NGT and DGS adopt a formal subject/​object agreement analysis within the framework of Distributed Morphology to account for the behavior of classifiers in these sign languages. Specifically, they assume that spatial and morphosyntactic features of arguments from the root phrases below the vP are copied onto various agreement heads at Morphological Structure and that classifier handshapes spell out these heads at PF subsequently. In other words, classifiers do not head their own functional projections. Rather, classifier handshapes spell out the (subject or object) agreement nodes onto which the (gender) agreement features have been copied from the root phrases below. In the next section, we will discuss other approaches which hypothesize that classifiers are heads of functional projections in syntax.

7.5.2  Agreement analysis and argument structure

7.5.2.1  Projection of a verbal classifier phrase

Inspired by Kegl's (1990) analysis of causatives in ASL, Lau (2002) investigated HKSL and identified a causative-inchoative alternation in the language. Similar to what Kegl described for ASL, the causative predicate involves a handling classifier and the unaccusative predicate a whole entity classifier.8 In her analysis, Lau assumes that verbal classifiers in sign languages are verbal affixes (cf. Supalla 1986). They are limited in types (i.e., handling, whole entity, extension) and serve primarily a reference tracking function for the noun argument. Following Brentari & Benedicto (1999), she posits that classifiers head their own functional projection – a verbal classifier phrase (VCLP) – which selects either a vP or a VP depending on the argument structure of the predicate (see Figure 7.7). The handle classifier affix occupies the head of the VCLP, which has an [agentive] feature with a [+A] value; this value triggers movement of the verb root ('bounce' in Figure 7.7a) from the lower VP to the higher v for checking the [+cause] feature. The affix then adjoins to the head of VCLP, and a verbal complex results. The [+A] feature also triggers movement of the external argument ('a boy') from the specifier of vP to the specifier of VCLP. Semantic and morphosyntactic features are checked under Spec-head agreement between the DP and the verbal complex under VCL. Note that Lau assumes that subsequent movement to higher positions in the structure, such as topicalization, will result in an OSV or SOV order. Alternatively, when VCLP is headed by a whole entity classifier and contains a [–A] value, the verb root moves from the VP directly to adjoin to the head of VCLP (see Figure 7.7b).


Figure 7.7  Analysis of transitive-causative (a: bounce+CLhandle) and intransitive-unaccusative (b: bounce+CLw/e) predicates in HKSL (adapted from Lau 2002: 135, 138)

To conclude, while the proposal that classifiers are heads of dedicated functional projections is appealing, more empirical evidence is needed to justify positing a VCLP for the classifier morphemes and, in particular, to establish the grammatical function of this maximal projection, as the traditional view that classifiers have the function of classifying objects does not by itself justify a functional projection. In the following section, we further expand on the hypothesis that classifiers are heads of functional projections by summarizing the proposal by Benedicto & Brentari (2004).

7.5.2.2  Classifiers as heads of functional projections

When classifiers are claimed to be associated with arguments of the predicates they combine with, one logical question that emerges is whether there is a correlation between classifier types and the valency of predicates. Benedicto & Brentari's (2004) seminal paper asserts (i) that classifiers (handling, whole entity, or bodypart) are heads of functional projections and determine the status of the argument as either internal or (non-)agentive external, and (ii) that argument structure alternations are triggered by the morphosyntactic properties of classifiers projecting as heads, thus determining the syntactic representations. Additionally, the association between the classifier and the nominal antecedent is achieved via Spec-head agreement within the appropriate functional projections. Following Supalla (1982), they assume that the verbal root is phonetically realized by movement (i.e., 'movement as root'). The root merges with one argument, but the status of this argument as internal or external is determined by the syntactic properties of the functional head (i.e., the classifier), in line with Borer's (2005) proposal that verbs only determine the number of arguments of the predicate, while their status is determined by functional projections above VP (Benedicto & Brentari 2004: 748). Therefore, classifiers in ASL pattern syntactically into two groups. One group is associated with internal arguments (i.e., the direct object of transitives or the subject of unaccusatives), the other with external arguments (i.e., the subject of transitives and unergatives). Specifically, handling classifiers select a transitive predicate because they encode both the external and internal arguments.


Whole entity classifiers are associated with internal arguments of intransitive unaccusative predicates, and bodypart classifiers (BPCL) with external arguments of intransitive unergative predicates. In their analysis, two functional projections are posited (f1P and f2P), which are headed by a phonologically null classifier morpheme (see Figure 7.8).

Figure 7.8  Types of classifiers and their syntactic representations: (a) f1+f2 = handling; (b) f2 = whole entity, extension; (c) f1 = limb/BPCL (Benedicto & Brentari 2004: 767, 769)

Handling classifiers, being transitive in nature, project two functional projections, where the specifiers of f2P and f1P are the sites for the internal and external arguments, respectively (Figure 7.8a). Benedicto & Brentari claim that this analysis is in line with Kegl's (1990: 157) analysis, in which handling classifiers involve a 'causer' argument (note that Benedicto & Brentari assume that the handling classifier is 'agentive'). Morphophonologically, handling classifiers are complex because the handpart represents the external argument and the selected fingers the internal argument, both combined to represent a transitive predicate. Correspondingly, whole entity classifiers and extension classifiers, which select an internal argument for intransitive predicates, project an f2P only (Figure 7.8b) and are spelled out by the selected fingers node. Lastly, bodypart classifiers, which select an external argument, project an f1P structure only (Figure 7.8c), and the articulator node serves as the morphophonological template for spell-out.


In other words, classifiers at either f2⁰ or f1⁰ are in a structural agreement relation with the DP that lands in the respective specifier of the functional projections. This Spec-head agreement enables the noun argument and the classifier to share morphological and syntactic properties. Note that Benedicto & Brentari (2004) argue that this agreement relationship is not based purely on φ-features, as assumed by Glück & Pfau (1999) and Zwitserlood (2003), but also on the argumental and syntactic properties that the classifiers entail. Apparently, Benedicto & Brentari treat the array of inherent semantic features of the classifiers as lexical rather than functional, contra the proposal of 'gender features' put forward in Zwitserlood (2003). With this syntactic machinery, Benedicto & Brentari account for two types of argument structure alternations in ASL: (i) the transitive-intransitive alternation, and (ii) the unergative-unaccusative alternation. Crucially, the verb root is held constant for both types of alternation, and the alternation results from classifier selection. (10a) and (10b) show the transitive-intransitive alternation for the root MOVE, where the handling classifier yields a transitive predicate (10a) and the whole entity classifier an intransitive predicate (10b). The unergative-unaccusative alternation is demonstrated by the verb root BOW in (11a) and (11b). The bodypart classifier yields an unergative predicate (11a), the whole entity classifier an unaccusative predicate (11b).

(10) a. Handling classifier, transitive
        INDEX BOOK C+MOVE
        s/he BOOK OBJ_GRABhdln+move_vertical-to-horizontal
        'S/he took the (standing) book and laid it down on its side.'
     b. Whole entity classifier, intransitive
        BOOK B+MOVE
        book 2D_flat_objw/e+move_vertical-to-horizontal
        'The (standing) book fell down on its side.'
        (ASL, Benedicto & Brentari 2004: 769)

(11) a. Bodypart classifier, unergative
        ACTOR S+BOW
        actor headlimb/BPCL+bow
        'The actor bowed.'
     b. Whole entity classifier, unaccusative
        ACTOR 1+BOW
        actor upright_beingw/e+bow
        'The actor bowed.'
        (ASL, Benedicto & Brentari 2004: 760)

In order to test the argumental properties of ASL classifiers (selection of either an external or internal argument or both), Benedicto & Brentari invoke syntactic diagnostics, two of which we will sketch here: NOTHING for testing internal arguments (as it can only combine with internal arguments) and the negative imperative FINISH for testing external arguments (as it can only combine with external arguments). As predicted, the bodypart classifier fails the NOTHING test (12a) but passes the FINISH test (12b), whereas the whole entity classifier fails the FINISH test (12c), suggesting that the bodypart classifier selects an external argument, whereas the whole entity classifier selects an internal argument.


(12) a. *ACTOR S+BOW NOTHING                  *[limb/BPCL]
        actor headlimb/BPCL+bow NEG-nothing
        #'None of the actors bowed.'
                          brow raise
     b. REMEMBER S+BOW FINISH!                 [limb/BPCL]
        remember headlimb/BPCL+bow STOP_IMPER
        'Remember, stop bowing!'
                          brow raise
     c. *REMEMBER 1+BOW FINISH!               *[whole entity CL]
        remember upright_beingw/e+bow STOP_IMPER
        'Remember, stop bowing!'
        (ASL, Benedicto & Brentari 2004: 760, 762)

Benedicto & Brentari's hypothesis that classifier types determine argument structure and the syntactic properties of arguments has been subject to much debate. Indeed, subsequent extension of the analysis to Catalan Sign Language (LSC) and Argentinian Sign Language (LSA) yielded rather mixed results, especially for the whole entity and bodypart classifiers (Benedicto et al. 2007). Unexpected compatibility was observed between the whole entity classifier and the agent-denoting adverbial VOLENT ('on purpose') in LSC, shown in (13a), as well as between the bodypart classifier and the [distributive] morpheme (which is a test for internal arguments) in LSA, shown in (13b), contrary to Benedicto & Brentari's (2004) predictions.

(13) a. UN HOME VOLENT 1+PASSAR
        one man on_purpose personw/e+move_from_left_to_right
        'A man passed on purpose (in front of the window).'
        (LSC, Benedicto et al. 2007: 1210)
     b. MUJER BAILE TUTÚ B-B+ESTAR[distr]
        woman dance tutu feetBPCL+be_loc[ext]
        'The ballerinas were standing on tiptoe in a row.'
        (LSA, Benedicto et al. 2007: 1212)

Another concern is how to analyze those handling classifiers that represent an instrument argument. Typical handling classifiers encode how the hand holds or manipulates an object; however, when it comes to encoding an instrument, Benedicto & Brentari (2004) recognize the need to distinguish between handling instrumental classifiers and whole entity instrumental classifiers. The former encodes how the hand holds the instrument, whereas the latter represents the instrument itself and how it acts on the object. Despite these differences, both types still share some core properties of 'handling classifiers' and enter the transitive-intransitive alternation, as shown in (14a) and (14b). Note that use of the adverb WILLING is another test invoked by Brentari & Benedicto; it can only be used with external arguments, and therefore, its use would lead to ungrammaticality in (14b).

(14) a. Handling instrumental classifier, transitive
        S+SAW_WOOD (√WILLING)
        obj/instr_grabhdlg-i+cut_wood voluntarily
        'S/he is cutting wood (voluntarily).'
     b. Whole entity instrumental classifier, unaccusative
        B+SAW_WOOD (*WILLING)
        2D_flat_objw/e-i+cut_wood voluntarily
        'The wood is cut (*voluntarily).'
        (ASL, Benedicto & Brentari 2004: 774)

To conclude this section, the seminal paper by Benedicto & Brentari (2004) does not assume an agreement analysis via agreement nodes for subject and object; rather, they posit two functional projections whose heads have morphological and syntactic properties associated with either external or internal arguments. Consequently, the DP that lands in the specifier of these projections will exhibit syntactic behavior according to these argumental properties. In the following sections, we will draw on some cross-linguistic data from HKSL, Tianjin Sign Language (TJSL), and NGT which suggest that the correlation between classifier type and argument structure is not as clear-cut as Benedicto & Brentari assume.

7.5.2.3  Transitive-transitive alternation based on instrumental classifiers

De Lint (2010, 2018) attempts to verify the correlation between classifier types and argument structure in ASL and NGT using experimental methods. In her 2010 study on ASL, she failed to replicate Benedicto & Brentari's (2004) results using the [distr] morpheme and NOTHING tests for internal arguments. She claims that the syntactic tests, as put forward by Benedicto & Brentari, "seem to be restricted to a couple of idiosyncratic pairs, rather than forming part of a largely productive process" (de Lint 2010: 25). Nevertheless, in her 2018 study based on NGT data, de Lint confirms Benedicto & Brentari's proposal that classifiers determine the argument structure of the predicate. Crucially, however, she finds that only the transitive-intransitive alternation is consistently observed, whereas the informants' performance on the unergative-unaccusative alternation shows various degrees of variation. An important contribution from de Lint's studies is her proposal to add a transitive-transitive alternation to the two patterns identified by Benedicto & Brentari (2004). According to her, it is necessary to make a distinction between MOVE-type verbs and SAW-type verbs. While the former type consists of causative verbs and enters the transitive-intransitive alternation, as discussed previously (see (10a,b) above), the SAW-type, as shown in (14) above, allows either an agent or an instrument to be the external argument as part of the theta grid, and consequently, this type may take part in a transitive-transitive alternation.9 Following Reinhart (2000, 2002), she calls verbs like 'saw', 'cut', 'peel', and 'screw' manner verbs, as in the English examples in (15).

(15) a. Max peeled the apple (with the knife).
     b. The knife peeled the apple.
     (de Lint 2010: 28)

Other properties of manner verbs are that they allow neither a cause nor a reduced unaccusative, as shown in (16a) and (16b).

(16) a. *The heat peeled the apple.
     b. *The apple peeled.
     (de Lint 2010: 29)


Based on these observations in spoken languages, de Lint argues that in ASL and NGT, the SAW-type verbs set themselves apart from the MOVE-type because the former type inherently includes a reference to a specific instrument, without which the event could not happen. In other words, "[…] manner verbs select instrument as part of their semantic structure" (de Lint 2018: 11), and the instrument may surface as an external argument, as shown in (17), where the classifier B-handshape representing a 2-dimensional object (i.e., a saw) combines with the verbal stem SAW_WOOD.

(17) Whole entity classifier, non-agentive
     B( )+SAW_WOOD
     CL(WE-i):2D_flat_obj+SAW_WOOD
     'The saw cut the wood.' ('The saw sawed the wood.')
     (de Lint 2010: 33)

She follows Reinhart’s (2000, 2002) proposal that if the predicate involves both an agent and an instrument, it is the agent that is realized as an external argument. An instrument becomes the external argument only when no agent is realized. De Lint further predicts that whole entity instrumental classifiers can appear in transitive constructions involving S AW -​type manner verbs and in which the agent is semantically/​implicitly realized. Clearly, this proposal differs from Benedicto & Brentari’s analysis, in which classifier predicates involving a whole entity instrumental classifier must be intransitive because they involve a non-​agentive internal argument only. According to de Lint, even a passive analysis fails to salvage this construction because the phonological (i.e., being directed towards the signer’s body) and morphological (i.e., usually agreement verbs) properties of passives do not apply to predicates involving a whole entity instrumental classifier (de Lint 2010: 30). While de Lint’s analysis of manner verbs is insightful, the empirical data provided in the two studies fall short of a clear picture. Note that the data from ASL is based on picture or sign selection, whereas the examples from NGT are based on elicited production. As said, the findings for the transitive-​intransitive alternation yield a consistent correlation between classifiers and argument structure in both ASL and NGT, namely handling classifiers for transitive predicates and whole entity classifiers for intransitive predicates with an internal argument. However, the stimuli for testing the unergative-​unaccusative alternation yielded variable results. De Lint mentions that this might be due to methodological limitations, although she also suggests that the split intransitivity may not be empirically justified and that “it could also be that verbs of type 1 (unergative verbs) in NGT do not systematically exhibit categorical associations between handshape and argument structure” (de Lint 2018: 31). As for the transitive-​transitive alternation, both the ASL and NGT data show a preference for the handling instrumental classifier, and this preference spills over to some stimuli for the non-​agent contexts, such as ‘spoon-​feed’ and ‘sweep’, which show variation between handling instrumental and whole entity instrumental classifiers (see de Lint (2018: 25–​26), Table 4). According to her, this variation may be due to lexicalization, and she thus calls for more research to further verify the correlation.

7.5.2.4  Cross-linguistic variation: data from HKSL and TJSL

Cross-linguistic data from HKSL and TJSL also show that Benedicto & Brentari's predictions are too strong. Typological variation seems to be at work in these two Asian sign languages. In particular, bodypart classifiers, some types of whole entity classifiers
(formerly known as semantic classifiers), whole entity instrumental classifiers, and extension classifiers show differences in their syntactic behavior when compared with ASL (Tang & Gu 2007; He & Tang 2013, 2018a, 2018b), in the sense that the same type of classifier may enter into different argument structures.10 First, some bodypart classifiers in both HKSL and TJSL (e.g., the inverted handshape referring to the legs of a human) can occur in intransitive unergative predicates, suggesting that they are associated with an agentive external argument – in line with Benedicto & Brentari's observation. In (18a), the predicate is atelic and denotes an activity of 'limping in a certain direction'. The 2-legged bodypart classifier refers to the external agentive argument GIRL that initiates and participates in the 'limping' activity.

(18) a. Bodypart classifier referring to external argument (HKSL)
        GIRLi limp_around+CLbodypart-i
        'The girl limps around.'
     b. Bodypart classifier referring to external and internal argument (HKSL)
        GIRLi stand_on_two_legsa+CLbodypart-i BOYj bwalk_towarda+CLbodypart-j // CLbodypart-i LEFT^LEG kick+CLbodypart-j // CLbodypart-i, falla+CLbodypart-i
        'A girl is standing on two legs here; a boy walks towards her and kicks one of her legs; the girl falls down.'

Example (18b) shows that the same bodypart classifier may refer to both external and internal arguments in either intransitive or transitive predicates. For the intransitive classifier predicate 'stand_on_two_legs', the external argument 'girl' is represented and indexed as CLbodypart-i. This classifier referring to 'the girl' then enters into a series of simultaneous constructions,11 serving as the internal argument of a directional motion predicate ('The boy walks towards the girl'), a transitive predicate ('The boy kicks the girl on one of her legs'), and an unaccusative predicate ('The girl falls down'). At the same time, the external argument of the transitive predicate is also represented by the same bodypart classifier, glossed and indexed as CLbodypart-j. In sum, examples (18a) and (18b) show that bodypart classifiers in HKSL can also occur as an internal argument of transitive predicates or of intransitive unaccusative predicates, contrary to Benedicto & Brentari's and Grose et al.'s predictions (see Section 7.6). Another example confirming that bodypart classifiers may refer to internal arguments comes from independent intransitive unaccusatives in TJSL. As (19) shows, the fact that the rabbit is sleeping excludes the possibility that 'The rabbit moves its ears to the left side of its head'. This sentence has only one interpretation, namely 'The rabbit's ears fall to the left (side of its head)', giving rise to a structure in which the bodypart classifier refers to the internal argument of the intransitive construction.

(19) Bodypart classifier referring to internal argument (TJSL)
     RABBIT SLEEP, ear_upward(on head)+CLbodypart ear_fall(on left side of the head)+CLbodypart
     'The rabbit is sleeping; its ears fall to the left side of its head.'

Next, we turn to whole entity classifiers. We first focus on those whole entity classifiers that are traditionally referred to as semantic classifiers. In HKSL and TJSL, when they
have a [+human] feature, these classifiers can be associated with either external or internal arguments. For instance, the upright handshape classifier, which refers to a human in both sign languages, is associated with the external argument MAN of the unergative predicate 'awalk_to_and_frob+CLw/e' in (20a), with both the external and internal arguments of the (simultaneous) transitive predicate 'push_with_his_back+CLw/e // CLw/e' in (20b),12 and with the internal argument of the unaccusative predicate 'fall' in (20c).

(20) a. Whole entity classifier, external argument of unergative predicate (HKSL)
        PARK MANi awalk_to_and_frob+CLw/e-i
        'A man walks to and fro in the park.'
     b. Whole entity classifier, external and internal arguments of transitive predicate (HKSL)
        WOMANi be_locateda+CLw/e-i MANj push_with_his_backa+CLw/e-j // CLw/e-i
        'A man pushes a woman with his back.'
     c. Whole entity classifier, internal argument of unaccusative predicate (HKSL)
        WOMANi falla+CLw/e-i
        'A woman falls down.'

The second type of whole entity classifier involves an instrument – the whole entity instrumental classifier. Data from HKSL and TJSL show that this type can also be associated with transitive predicates. Using non-manuals signaling an agent as a diagnostic, He & Tang (2018b) show that these predicates may or may not be agentive. Example (21), the predicate of which is illustrated in Figure 7.9, involves both an agent WOMAN and an instrument NAIL^FILE as potential antecedents for the classifiers. In the subsequent transitive predicate 'file', a whole entity instrumental classifier is selected instead, and the agent is implicitly realized through the non-manual adverbial 'effortlessly' that scopes over the predicate. Following Supalla (1982), the authors argue that the signer's body systematically represents the agent argument of a transitive predicate involving an instrument.

(21) Whole entity instrumental classifier, transitive and agentive (HKSL)
                                    effortlessly
     WOMAN NAILi ROUGH, NAIL^FILEj file+CLw/e-j // CLbodypart-i
     'The woman's nail is rough; (she) files her nail with a nail file effortlessly.'

Figure 7.9  HKSL two-​handed classifier predicate ‘file+C L w/​e //​ C L bodypart’ in (21), accompanied by non-​manual  marker


Further evidence from TJSL confirms that instrumental and non-instrumental whole entity classifiers can occur in transitive predicates. In (22a), as in (21), the non-manuals signal the involvement of an agent in the drilling event in which a whole entity instrumental classifier is adopted. In (22b), the first-person pronoun 'I', which is an external argument, co-occurs with a whole entity classifier that acts as an internal theme argument.

(22) a. Whole entity instrumental classifier, transitive and agentive (TJSL)
                   with effort
        GROUNDa-i AUGERj drilla+CLw/e-j // CLw/e-i
        '(I use) the auger to drill the ground carefully.'
     b. Whole entity classifier, transitive predicate (TJSL)
        COMPUTERi IX-1 open+CLw/e-i, TYPE++
        'I open the computer and type.'

Next, we examine the status of handling classifiers as being associated with both an external and an internal argument. He & Tang (2018a) confirm that handling classifier predicates in HKSL and TJSL are transitive and that some may enter into the transitive-intransitive alternation just like the MOVE-type verbs in NGT and ASL (see (10)). However, at least in HKSL and TJSL, a distinction has to be made between the commonly observed agent-oriented handling classifiers, which express the manipulation of an object by the hand (e.g., the handshape for handling a SAW), and non-agent-oriented handling classifiers, in which the hand represents an inanimate object that manipulates another object.13 The latter type is illustrated in (23a), where a crane manipulates a pig by lifting it. Diagnostics like inserting an agent-oriented adverbial (e.g., ON-PURPOSE) can help us distinguish between the two types of predicates: use of such an adverbial in the latter type of predicate leads to ungrammaticality (23b). Even if one adopts the syntactic analysis of Benedicto & Brentari, these data show that the head f1, which introduces an external argument in the specifier of f1P, does not necessarily require that argument to be an agent.

(23) Handling classifier, transitive and non-agentive (TJSL)
     a. CAR CRANEi PIGj be-locateda+CLw/e-j, aliftb+CLhandle-i // CLw/e-j
        'The crane lifts the pig.'
     b. *ON-PURPOSE CAR CRANEi PIGj be-locateda+CLw/e-j, aliftb+CLhandle-i // CLw/e-j
        'The crane lifts the pig on purpose.'

In summary, data from HKSL and TJSL suggest that Benedicto & Brentari's (2004: 765) proposal that "the syntactic status of the argument is determined by the classifier itself, not by the lexical properties of the verbal root" does not hold cross-linguistically. At least whole entity classifiers, whole entity instrumental classifiers, and bodypart classifiers in these sign languages may be associated with either external or internal arguments in transitive as well as intransitive predicates.14 To account for this phenomenon, one proposal is to further categorize whole entity classifiers based on the [+human] or [+animate] feature. As shown, the human classifier (i.e., the upright handshape) shows interesting syntactic behavior, which sets it apart from those whole entity classifiers that refer to a theme or an instrument. Similarly, a more detailed analysis is necessary to further identify the syntactic properties of the different types of handling classifiers. In the next section, we will discuss an event structure approach towards characterizing classifier predicates.


7.6  Syntactic structure of classifier predicates is built upon event structure

As mentioned previously, Grose et al. (2007) argue against Benedicto & Brentari's (2004) proposal that bodypart classifiers in ASL select unergative predicates with an external agentive argument only. First, Benedicto & Brentari claim that unergative predicates are generally intransitive and atelic; however, Grose et al. find that predicates involving bodypart classifiers in ASL are transitive and may be atelic or telic. In the telic predicate in (24), the handshape for 'ANIMAL_FACE' (i.e., 'head') combines with the verbal root BOW, where the orientation change (movement at the wrist joint) signals a morphological EndState marking of a telic event. In other words, the 'head' becomes the internal argument over which the bowing event operates, and event delimitation is suggested by a change in wrist position (Event Visibility Hypothesis, Wilbur 2003).

(24) FOX ANIMAL-FACE+BOW
     fox head-bow
     'The fox bowed its head.'

(ASL, Grose et al. 2007: 1275)

Therefore, predicates which involve bodypart classifiers are transitive, with an internal argument that is "semantically restricted to being a 'part' of the external argument" (Grose et al. 2007: 1263). Further, the authors correlate two movement types in classifier predicates with telicity: motion/active movements are associated with telic events, and manner/imitative movements with dynamic atelic events. Among them, changes in the location of an argument and in the orientation of a bodypart may visibly signal a natural endpoint of an event. Second, bodypart classifiers have morphophonological specifications for handpart and selected fingers, the former for external and the latter for internal arguments, similar to handling classifiers in transitive predicates. Based on these observations, Grose et al. argue that the unergative-unaccusative alternation, as proposed by Benedicto & Brentari, should be reanalyzed as a transitive-unaccusative alternation; consequently, (24) is similar to (25a) in English, representing a transitive predicate:

(25) a. She bowed her head.
     b. She bowed.

Instead of adopting a purely syntactic analysis like Benedicto & Brentari's, Grose et al. (2007) take a different approach and propose that the syntactic representation of classifier predicates is generated from underlying conceptual structures for events and sub-events (see Figure 7.10). Following Pustejovsky (1991, 1995), they argue that verbs are underspecified for their argument structure and event types, and that the presence/absence of an internal argument determines their argument structure and event telicity. If an internal argument is present, the event is telic; otherwise it is atelic. In other words, "event structures are conceptual structures that generate syntactic structures, not the other way round" (Grose et al. 2007: 1265). There is typological variation in terms of the mapping function between the core meaning of sub-events and the lexical verbs. English is less transparent because a lexical verb like 'break' may encompass multiple sub-events of 'cause' and 'result'. In contrast, ASL is more transparent in terms of mapping an event onto its sub-event components in syntax.
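The two generalizations just described – that the projection of an internal argument determines telicity, and that movement type correlates with telicity – can be summarized schematically as follows (this is merely a restatement of the text in formula form, using the ad hoc label internal-argument, not Grose et al.'s own notation):

     telic(e) ⟺ ∃y [internal-argument(y, e)]
     motion/active movement ⇒ telic(e);  manner/imitative movement ⇒ ¬telic(e)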


To capture the mapping from sub-event structures to syntax, Grose et al. follow Ramchand's (2008) 'first phase syntax' proposal, according to which verbs are decomposed into initiation, process, and result sub-events, and each sub-event generates its own verb phrase with its respective argument role in the specifier position, that is, Initiator, Undergoer, and Resultee. They maintain Benedicto & Brentari's proposal that f1P and f2P, which they re-label as fCL1P and fCL2P, are the loci of classifier predicate generation involving an external and internal argument, respectively. The distribution of classifiers to fCL1P and fCL2P also largely follows Benedicto & Brentari's analysis, except that fCL2P, the locus for generating a predicate involving an internal argument, also allows bodypart classifiers, in addition to handling classifiers and whole entity classifiers. As such, the verb phrase VIP that the functional head fCL1⁰ dominates has properties for causation and event initiation, and the external argument (i.e., the Initiator) occupies the specifier of VIP; fCL2⁰ also dominates two VPs, renamed as VUP and VRP ('U' referring to 'undergo' and 'R' to 'result'), to accommodate telicity and the thematic roles of Undergoer and Resultee in the relevant specifiers of the sub-event structures. Hence, Spec-head agreement between the argument in the specifier and the respective classifier morphemes that head these series of VPs ensures that the morphological and phonological specifications relevant to the verb phrase are assigned to the argument, thus removing the need for verb movement and abstract feature checking in syntax.

Figure 7.10  Syntactic structure based on three sub-​events (Grose et al. 2007: 1278)
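The dominance relations described above can also be rendered in simplified bracket notation (an illustrative sketch rather than a reproduction of Figure 7.10; the embedding of VRP inside VUP follows Ramchand's (2008) layering of sub-events):

     [fCL1P fCL1⁰ [VIP Initiator [ VI … ]]]
     [fCL2P fCL2⁰ [VUP Undergoer [ VU [VRP Resultee [ VR … ]]]]]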

As for the number of functional projections to accommodate classifier morphemes, Benedicto & Brentari's and Grose et al.'s analyses are largely the same, having two separate functional projections for the external and internal arguments. This differs from Lau's (2002, 2012) syntactic analysis, in which the head of VCLP bears the burden of distinguishing a transitive from an intransitive predicate through the [±A] value of an agentivity feature (see Section 7.5.2.1, Figure 7.7). Additionally, Grose et al.'s event structure approach, which is based on Ramchand (2008), enables systematic Spec-head agreement between the argument and the classifier at specific tiers of phrasal projections based on sub-event structure. In the next and final section, we will summarize some recent attempts to examine classifier predicates from a formal semantics perspective; of particular importance is the incorporation of the iconic properties of classifiers into the analysis.


7.7  Semantic analyses of classifier predicates

Some analyses of classifiers or classifier predicates within the framework of formal semantics have emerged in recent years. Cecchetto & Zucchi (2006) propose that motion predicates are 'indexical predicates', the interpretation of which is context-dependent and analogous in meaning to 'moves in a way similar to this'. Zucchi (2012: 730) further suggests that the handshape component is linguistic due to its arbitrary nature, while location and movement are non-linguistic, as they are gradient with infinite manifestations in the signing space (see also Zwitserlood, Chapter 8, for discussion). Irrespective of location and movement being gradient, he suggests that they play the role of demonstration in motion and locative predicates, offering a "reference for an indexical term in the logical representation of the predicate" (Zucchi 2012: 730). Hence, the logical interpretation of a motion predicate (e.g., move+CLw/e 'a vehicle moves') in (26) shows that there is a light verb MOVE that merges with a classifier for vehicles to form the linguistic structure of the predicate. By conjunction, the function similar incorporates a demonstration as a context-dependent event modifier, meaning that the predicate in the event of 'a vehicle in motion' is demonstrated by the movement of the classifier (i.e., 'cl_{vehicle}'), a non-linguistic component of the predicate. As demonstration, movement offers a reference by depicting the kind of motion the vehicle undergoes (see also 'depicting verbs' in Liddell (2003) and 'body partitioning' in Dudis (2004)). At the discourse level, when producing this motion predicate, the signer assumes a correspondence between regions of the signing space and regions of real space in which the real-world object denoted by the classifier moves.

(26) λe[move(x_{i,vehicle}, e) ∧ similar_{s→l}(dthat[the movement of cl_{icA,vehicle}], e)]
(Zucchi 2012: 730)

Davidson (2015) further elucidates the theory of demonstration, arguing that it potentially offers a unified account for facets of verbal iconicity in quotations and classifier predicates in sign languages. According to her, demonstration is defined as a kind of performance, that is, the speaker/signer performs some aspect of an utterance (by means of quotation) or an action (by means of a classifier predicate). As performance, it may incorporate properties like affect, facial expressions, gestures, etc. into the domain of expression. To interpret such 'events' in formal semantics, Davidson proposes that demonstrations are type d, a subtype of natural language objects that may cover spoken, written, signed, or gestural demonstrations. She defines demonstration-of as a predicate that takes two arguments, demonstrations (type d) and events (type e), in that "a demonstration d is a demonstration of e (i.e., demonstration(d,e) holds) if d reproduces properties of e and those properties are relevant in the context of speech", where the properties can be words, intonation, facial expressions, and/or gestures that are relevant in the context of the event (Davidson 2015: 487). Following Zucchi (2012), Davidson argues that classifier predicates are light verbs that require a demonstrational argument.
The verb root has semantics analogous to the English 'be+like' construction, plus all other semantics from light verbs like MOVE, LOCATE, MANIPULATE, and EXTEND-TO to denote the subject's movement, location, manner of manipulation, and extent (Davidson 2015: 494). Seen in this light, classifier predicates introduce a demonstration as an event modifier through a combination of movement and location, the two non-linguistic units; and the demonstration is 'carried out' by the classifier, a linguistic unit. Furthermore, she follows Benedicto & Brentari's (2004) proposal that the classifier handshape, rather than the light verb, determines the argument structure of the predicate. Therefore, a size and shape classifier has three arguments: a demonstration, an event, and an experiencer, while a handling classifier has four: a demonstration, an event, an agent, and a theme.
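These argument structures can be made concrete as schematic denotation templates, where d is the demonstration variable (the ordering of arguments in the handling case follows (27b) below; the size and shape case is merely an illustrative parallel, not Davidson's own formula):

     [[size and shape classifier]] = λdλxλe [experiencer(e,x) ∧ … ∧ demonstration(d,e)]
     [[handling classifier]] = λdλyλxλe [agent(e,x) ∧ theme(e,y) ∧ … ∧ demonstration(d,e)]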


As such, a transitive predicate that involves a handling classifier, like (27a), will have a logical interpretation like (27b).

(27) a. BOOK CL-C-MOVE
     b. [[the gestural movement of the hand]] = d1
        [[CL-C-MOVE]] = λdλyλxλe [agent(e,x) ∧ theme(e,y) ∧ chunky(y) ∧ moving(e) ∧ demonstration(d,e)]
        [[CL-C-MOVE (movement of the hand)]] = λyλxλe [agent(e,x) ∧ theme(e,y) ∧ chunky(y) ∧ moving(e) ∧ demonstration(d1,e)]
        [[BOOK CL-C-MOVE (movement of the hand)]] = λxλe [agent(e,x) ∧ theme(e,book) ∧ chunky(book) ∧ moving(e) ∧ demonstration(d1,e)]
        Existential closure: ∃e∃x [agent(e,x) ∧ theme(e,book) ∧ chunky(book) ∧ moving(e) ∧ demonstration(d1,e)]
(ASL, Davidson 2015: 496)

Davidson provides a few pieces of empirical evidence to support her demonstration analysis. For instance, she argues that argument alternation between a transitive predicate and an unaccusative predicate is well accounted for in an event-based semantics, where a gestural movement of the hand demonstrates whether an agent moves the book while handling it or the book moves. Another piece of evidence comes from the language production of child bimodal bilinguals, who sometimes use vocal gestures for demonstration when they produce classifier predicates in spontaneous dialogues, and from cases of code blending in which the verb from a spoken language blends with a classifier predicate which demonstrates the action.

The thrust of Davidson's (2015) proposal is that quotations and classifier predicates do incorporate their iconic elements in much the same way, in the sense that both use demonstrations as event modification, based on the concepts of event modification and context-dependent demonstration discussed in Clark & Gerrig (1990). As the principle of parsimony requires that a language use the same strategy to encode the function of event modification through demonstration, she argues that, in sign languages, it is role shift that unifies these two forms of verbal iconicity, by instantiating quotations as a form of language demonstration and classifier predicates as a form of action demonstration, as shown in the examples in (28). Note that in the literature, role shift is also referred to as constructed action, action role shift, or direct action reports.15 It involves primarily changes in the signer's facial expressions and eye gaze, sometimes in combination with head position and body lean, to assume another character in order to 'report' directly on their language (28a), thoughts (28b), or actions (28c). In these examples, the reports are indexed with 'a', as they are from the perspective of 'Mary'.

(28) a. Direct Language Report
                    _____________a
        MARYa SAY   IX1 1-WATCH-b
        'Mary said "I was watching it".'
     b. Direct Attitude Report
                      __________a
        MARYa THINK   IX1 HAPPY
        'Mary thought "I was happy".'
     c. Direct Action Report
               __________a
        MARYa  1-WATCH-b
        'Mary was watching it.'
(ASL, Davidson 2015: 499)

Characteristically, these reports adopt a first-person pronoun, an indexical expression, the interpretation of which is context-dependent; that is, IX1 in (28a,b) and the first-person marker on the verb in (28a,c) refer to 'Mary', the subject of the clause, but not to 'the signer'. Note that the agreement verb ('inflecting verb' in Davidson's analysis) WATCH under action reports (e.g., in (28c)) is lexical, and that the role shift is interpreted iconically, where the signer tries to demonstrate some aspects of the behavior of the subject (i.e., Mary). According to Davidson, action reports introduced by agreement verbs, like (28c), may be analyzed as classifier predicates that embody a 'body classifier' (Supalla 1982), that is, using the body of the signer to take on the role of another person. (29a) offers another example of an action report, in which the body classifier demonstrates the predicate LOOK-UP-AT through role shift, which Davidson argues to be similar to the classifier predicate in (29b), which involves a bodypart classifier referring to the man's head turning up.16 In other words, both agreement verbs with role shift and classifier predicates are action reports, which parallel language reports with role shift in introducing demonstration as event modification.17 What differs between them is that demonstration is introduced as an argument for language reports and as a predicate modifier for action reports.

(29) a.             __________a
        MANa MOVIE  LOOK-UP-AT
        'The man looked up at the movie.'
     b. MAN CL-S(move upward)
        'The man's head turned up (towards the screen).'
(ASL, Davidson 2015: 508)

At this juncture, it has to be pointed out that Davidson’s demonstration analysis of action reports differs from the proposal made by other researchers, who take role shift to involve a context-​shifting operator which applies to an embedded clause and which switches the interpretation of indexicals within its domain (Quer 2005; Herrmann & Steinbach 2012; Schlenker 2017). According to Davidson, the context-​shifting operator analysis for action reports hinges much upon how researchers treat the first-​person indexical materials under role shift. She points out that there is a puzzling discrepancy in interpretation for potential action reports involving first-​person marking. In Schlenker’s (2017) French Sign Language (LSF) example in (30), first-​person agreement is licensed and obtains a shifted action report interpretation.


(30) RECENTLY WOLF IPHONEc FINDc HAPPY. SHEEPa IX-b b-CALL-a.
     ________________________rsb
     IX-b IPHONE 1-SHOW-CL-a
     'Recently the wolf was happy to find an iPhone. He called the sheep. He [= the wolf] showed the iPhone to him (the wolf in fact showed the iPhone to the sheep).'
(LSF, Schlenker 2017: 31)

A similar phenomenon is observed in ASL (31a); however, an overt first-person pronoun is ungrammatical under the action report interpretation, as shown in (31b).

(31) a.         topic       _________a
        FRIENDa OLYMPICSb   1-WATCH-b
        'My friend watched the Olympics.'
     b.               topic           _____________a
        FRIENDa (SAY) OLYMPICSb   IX1 1-WATCH-b
        'My friend was like "The Olympics, I watch".'
        *'My friend watched the Olympics.'
(ASL, Davidson 2015: 506)

Schlenker (2017) also stipulates that action reports do not contain first-person pronouns for shifted interpretation. In other words, (31b) only receives an attitude interpretation. One assumption of the context-shifting operator analysis is that it applies to a full syntactic embedded clause and semantic proposition. The demonstration account, on the other hand, does not require an embedded clause with a subject, since the demonstrated predicate is the main predicate; this explains why the presence of a first-person pronoun does not yield an action report but an attitude report, because action reports assume a mono-clausal analysis. As such, the first-person pronoun in (31b) becomes an extraneous syntactic subject, a position already filled by FRIEND. The picture gets more complicated once more sign languages are brought into the discussion. Quer (2018) argues that using the presence of a first-person pronoun to differentiate between action and attitude report interpretations is not as clear-cut as Schlenker proposes. He suggests that LSC provides evidence for cross-linguistic variation because the absence of a first-person pronoun, as in (32), does not lead to an action report interpretation. Crucially, this example – just like the alternative with IX1 preceding FED-UP – can only receive an attitude reading.

(32)                    ______rs:a
     PERSON CL:B 3a-1   FED-UP
     'A person approached me (and said): "I'm fed up!" / she was fed up.'
(LSC, Quer 2018: 281)

To conclude this section, we have summarized a formal semantic analysis of classifier predicates put forward by Davidson (2015). While she agrees with Schlenker's (2017) proposal to incorporate iconicity for a unified analysis of language reports, attitudinal reports, as well as action reports via role shift, she disagrees with the claim that role shift is the overt realization of the scope of an operator which shifts the context of interpretation for indexicals in an embedded clause. Additional evidence from LSC shows that the absence of first-person pronouns does not necessarily lead to an action report
interpretation. Davidson proposes that classifier predicates can be analyzed as one type of action reports, and that both quotations and classifier predicates introduce a demonstration into the predicate as event modification; this introduction can be achieved via role shift, an iconic element commonly observed in sign languages. This event-​based semantics approach takes the role-​shifted predicate to be the main verb which introduces either an argument in language reports or an event modifier in action reports.

7.8  Conclusion

In this chapter, we have summarized different formal approaches towards characterizing the morphosyntactic and argument-structural properties of classifiers in sign languages. The initial suggestion by Supalla (1982) that classifiers are agreement markers has triggered research that sets out to identify the nature of agreement involved in classifier predicates. According to some accounts, agreement between the classifier and the noun argument may be structurally defined as Spec-head agreement for their sharing of semantic as well as morphosyntactic properties. This mechanism explains how the function of classification (Aikhenvald 2000) is obtained when the handshape component of these 'spatial predicates' is assumed to be a verbal classifier. Another consensus, namely that classifiers are associated with arguments of predicates, has interesting theoretical consequences. Some researchers adopting Distributed Morphology have proposed that features of different classifiers are merged within specific root phrases until a categorical vP is formed, and that subsequently, they further merge with the relevant subject/object agreement nodes at Morphological Structure. Another approach is to invoke specific functional projections headed by classifiers for internal and external arguments, with the assumption that the lexical properties of a verb contain no information about the argumental status of the nominal arguments. That classifiers determine argument structure and enter into specific alternation patterns has triggered research leading to some conflicting results. At least cross-linguistic data from HKSL and TJSL, as well as some NGT data, show that direct mapping between classifiers and argument structure is not as straightforward as assumed by Benedicto & Brentari (2004). Further research on this association, involving also a broader range of sign languages, is necessary in order to appreciate the formal complexity that underlies classifiers and the argument structure of predicates.

One corollary issue arising from research on classifier predicates is how to analyze those frozen signs that have components that resemble classifiers or even body locations. The insights from Zwitserlood's (2003, 2008) root compound analysis and Meir's (2001) noun incorporation analysis are useful points of departure for future investigation. Meir's instrument incorporation analysis for signs like SPOON-EAT in ISL is insightful. She hypothesizes that, within a noun incorporation analysis, classifiers do not necessarily introduce new referents into the discourse because it is possible for them not to refer to specific referents (e.g., a specific spoon) but to a specific type of event with a generic object. In other words, under the noun incorporation analysis, the classifier for SPOON is kind- but not entity-denoting, as an instrument participating in an event with a specific manner of eating (i.e., eating with a spoon).

The second corollary issue is how to analyze predicates involving instrumental classifiers and bodypart classifiers, as the findings so far are not fully conclusive. The proposed distinction between handling instrumental and whole entity instrumental classifiers is helpful; however, the findings suggest that the latter type does not necessarily
select an intransitive but a transitive predicate. As results from HKSL and TJSL indicate, agentive non-manuals scope over the predicates, indicating that a purely intransitive analysis does not work. Additionally, whether bodypart classifiers select a transitive or an intransitive unergative predicate is debatable. While the BOW example may not be a good litmus test, a classifier predicate like 'walking with two feet stepping on the ground one by one', which shows manner of walking, suggests that the predicate is an activity and hence unergative and atelic, unless one argues that this manner verb is just a modifier of an underlying MOVE predicate – in which case the classifier bears no argumental properties. Last, one should not ignore emerging studies that take classifier predicates as depictions of events and possibly states, thus inviting formal semantic analyses, such as the incorporation of a demonstration component for events and actions. As our understanding of classifier predicates is still incomplete, tackling the phenomenon from different perspectives is definitely desirable.

Notes

1 Some experimental studies on these subcomponents reveal that movement and especially location are more gradient, thus leading to a high degree of convergence among signers of different sign languages, and even between Deaf signers and hearing non-signing gesturers. In contrast, handshape is more at the categorical and conventionalized end of the continuum (Emmorey & Herzig 2003; Schembri et al. 2005; also see Zwitserlood, Chapter 8).
2 Owing to space limitations, we refer the reader to Liddell (2003) for a fuller description of how to capture the iconic nature of depicting verbs within his model.
3 Supalla (1982) also includes orientation affixes to attach with the classifier affix to encode the orientation of the noun referent in the real world or against another noun referent.
4 A syntactic, head movement account has been put forward by Baker (1988), which Rosen argues against. According to her, this approach fails to account for the doubling and stranding phenomenon arising from Classifier Noun Incorporation.
5 See Brentari & Benedicto (2004: 775) for a similar claim in ASL, which is also reported in de Lint (2010).
6 Moreover, some sign languages, like Japanese Sign Language (Fischer 1996) and Taiwan Sign Language (Smith 1990), do have distinct hand configurations for masculine and feminine referents.
7 Zwitserlood argues that the term 'frozen signs' is misleading because, at least in NGT, such signs may be derived from classifier predicates and both may coexist; also, some frozen signs are in fact productively created on their own. As such, frozen signs should be renamed as (semantically) motivated signs, a term proposed by van der Kooij (2002).
8 Lau uses the label SASS in her model, which would be categorized as whole entity classifiers in the literature nowadays.
9 The transitive-transitive alternation is called 'manner verb alternation' in de Lint (2010).
10 Extension classifiers in HKSL can be intransitive, as in (i), and causative transitive, as in (ii).

(i)  LAST^NIGHT, (IX-1) SEEa LIGHTNING, lightninga+CLext, lightningb+CLext
     'Last night, I saw lightning; it struck here and there.'
(ii) HOUSEi be-locatedc+CLw/e-i, LIGHTNINGj strike+CLext-j // CLw/e-i, fall_down+CLw/e-i.
     'It struck a house down there.'

Owing to space limitations, we refer the reader to Tang & Gu (2007) for details.
11 The simultaneous use of classifier predicates on both hands is indicated by '//' in the glosses; classifier predicates on the dominant hand appear to the left and classifier predicates on the non-dominant hand to the right of the '//'.
12 It seems that it is possible in both HKSL and TJSL to adopt either a whole entity or a bodypart classifier in unergatives. According to the judgments of native signers, in these predicates, the focus is on the humans or their bodyparts. In a simple 'motion' predicate, a whole entity classifier
is preferred. Focus on how the legs behave in the motion predicate will lead to the selection of a bodypart classifier.
13 This type of handling classifiers is different from the handling instrumental classifiers described by Benedicto & Brentari (2004: 777), which are defined as an instrument handled by the hand of a human, thus subsumed under the handling classifier category.
14 For further discussion of exceptions, that is, non-canonical mappings of classifier type and argument structure (in DGS, Russian Sign Language, Sign Language of the Netherlands, and Kata Kolok (Bali)), see Kimmelman et al. (2019).
15 Readers are encouraged to consult Cormier et al. (2015) for their suggestion for a systematic coding of constructed action. According to them, constructed action should be distinguished from role shift. Role shift is characterized as a shift between roles (e.g., from a narrator role to a character role, or simultaneous combination of roles), embedded in a stretch of constructed action (i.e., signing discourse), which is further divided into overt, reduced, and subtle.
16 Note that for Davidson's example in (29b), which involves no role shift, we assume that the presence of iconic elements (like a bodypart classifier) still performs the function of demonstration in an action report.
17 Davidson treats role shift here as a morpheme which simultaneously combines with the other verb morphology of classifier predicates.

References Aikhenvald, Alexandra Y. 2000. Classifiers:  A typology of noun categorization devices. Oxford: Oxford University Press. Aikhenvald, Alexandra Y. 2003. Classifiers in spoken and in signed languages:  How to know more. In Karen Emmorey (ed.), Perspectives on classifier constructions in sign languages, 87–​90. Mahwah, NJ: Lawrence Erlbaum. Allan, Keith. 1977. Classifiers. Language 53(2). 285–​311. Anderson, Stephen R. 1992. A-​morphous morphology. Cambridge: Cambridge University Press. Baker, Mark C. 1988. Incorporation: A theory of grammatical function changing. Chicago: University of Chicago Press. Benedicto, Elena & Diane Brentari. 2004. Where did all the arguments go?: Argument-​changing properties of classifiers in ASL. Natural Language & Linguistic Theory 22(4). 743–​810. Benedicto, Elena, Sandra Cvejanov, & Josep Quer. 2007. Valency in classifier predicates: A syntactic analysis. Lingua 117(7). 1202–​1215. Borer, Hagit. 2005. In name only: Structuring sense. Oxford: Oxford University Press. Brentari, Diane & Elena Benedicto. 1999. Verbal classifiers as heads of functional projections: Evidence from ASL. In Sonya F. Bird, Andrew Carnie, Jason D. Haugen, & Peter Norquest (eds.), Proceedings from the 18th West Coast Conference on Formal Linguistics, 69–​81. Somerville, MA: Cascadilla Press. Cecchetto, Carlo & Sandro Zucchi. 2006. Event descriptions and classifier predicates in sign languages. Paper presented at the Workshop on Lexicon, Syntax, and Events in Honor of Anita Mittwoch. Jerusalem: Hebrew University of Jerusalem, July 2006. Clark, Herbert H. & Richard J. Gerrig. 1990. Quotations as demonstrations. Language 66(4). 764–​805. Cogill-​Koez, Dorothea. 2000. Signed language classifier predicates: Linguistic structures or schematic visual representation? Sign Language & Linguistics 3(2). 153–​207. Corbett, Greville G. 1991. Gender. Cambridge: Cambridge University Press. Cormier, Kearsy, David Quinto-​Pozos, Zed Sevcikova, & Adam Schembri. 2012. Lexicalisation and de-​lexicalisation processes in sign languages: Comparing depicting constructions and viewpoint gestures. Language & Communication 32(4). 329–​348. Davidson, Kathryn. 2015. Quotation, demonstration, and iconicity. Linguistics and Philosophy 38(6). 477–​520. de Lint, Vanja. 2010. Argument structure in classifier constructions in ASL:  An experimental approach. Utrecht: University of Utrecht MA thesis. de Lint, Vanja. 2018. NGT classifier constructions: An inventory of arguments. Sign Language & Linguistics 21(1). 3–​39.


Classifiers: theoretical perspectives DeMatteo, Asa. 1977. Visual imagery and visual analogues in American Sign Language. In Lynn Friedman (ed.), On the other hand:  New perspectives on American Sign Language, 109–​136. New York: Academic Press. Dudis, Paul G. 2004. Depiction of events in ASL: Conceptual integration of temporal components. Berkeley, CA: University of California PhD dissertation. Edmondson, William H. 1990. A non-​concatenative account of classifier morphology in signed and spoken languages. In Siegmund Prillwitz & Tomas Vollhaber (eds.), Current trends in European sign language research: Proceedings of the Third European Congress on Sign Language Research, 187–​202. Hamburg: Signum Press. Emmorey, Karen & Melissa Herzig. 2003. Categorical versus gradient properties of classifier constructions in ASL. In Karen Emmorey (ed.), Perspectives on classifier constructions in signed languages, 222–​246. Mahwah, NJ: Lawrence Erlbaum. Engberg-​Pedersen, Elisabeth. 1993. Space in Danish Sign Language:  The semantics and morphosyntax of the use of space in a visual language. Hamburg: Signum Press. Engberg-​Pedersen, Elisabeth. 2003. How composite is a fall? Adults’ and children’s descriptions of different types of falls in Danish Sign Language. In Karen Emmorey (ed.), Perspectives on classifier constructions in signed languages, 311–​332. Mahwah, NJ: Lawrence Erlbaum. Fischer, Susan D. 1996. The role of agreement and auxiliaries in sign language. Lingua 98(1–​3). 103–​119. Frishberg, Nancy. 1975. Arbitrariness and iconicity: Historical change in American Sign Language. Language 51(3). 696–​719. Geraci, Carlo & Josep Quer. 2014. Determining argument structure in sign languages. In Asaf Bachrach, Isabelle Roy, & Linnaea Stockall (eds.), Structuring the argument: Multidisciplinary research on verb argument structure, 45–​60. Amsterdam: John Benjamins. Grinevald, Colette. 2003. Classifier systems in the context of a typology of nominal classification. In Karen Emmorey (ed.), Perspectives on classifier constructions in signed languages, 91–​109. Mahwah, NJ: Lawrence Erlbaum. Glück, Susanne & Roland Pfau. 1998. On classifying classification as a class of inflection in German Sign Language. In Tina Cambier-​Langeveld, Aniko Lipták, & Michael Redford (eds.), Proceedings of ConSole VI, 59–​74. Leiden: SOLE. Glück, Susanne & Roland Pfau. 1999. A Distributed Morphology account of verbal inflection in German Sign Language. In Tina Cambier-​Langeveld, Aniko Lipták, Michael Redford, & Eric Jan van der Torre (eds.), Proceedings of ConSole VII, 65–​80. Leiden: SOLE. Grose, Donovan, Ronnie B. Wilbur, & Katharina Schalber. 2007. Events and telicity in classifier predicates: A reanalysis of body part classifier predicates in ASL. Lingua 117(7). 1258–​1284. Halle, Morris & Alec Marantz. 1993. Distributed Morphology and the pieces of inflection. In Ken Hale & Samuel J. Keyser (eds.), The view from building 20, 111–​176. Cambridge, MA: MIT Press. He, Jia & Gladys Tang. 2013. A case study on instrument classifier predicates in Tianjin Sign Language. Paper presented at International Conference on Sign Linguistics and Deaf Education in Asia 2013. Hong Kong: The Chinese University of Hong Kong, January 30–​February 2, 2013. He, Jia & Gladys Tang. 2018a. Instrument in Hong Kong Sign Language and Tianjin Sign Language classifier predicates. Paper presented at Argument Structure Across Modalities. Amsterdam: University of Amsterdam, February 1–​2, 2018. He, Jia & Gladys Tang. 2018b. 
Causativity and transitivity in classifier predicates in HKSL and TJSL. Paper presented at Formal and Experimental Advances in Sign Language Theory (FEAST 7). Venice: University of Venice, June 18–​20, 2018. Herrmann, Annika & Markus Steinbach. 2012. Quotation in sign languages  –​a visible context shift. In Isabelle Buchstaller & Ingrid van Alphen (eds.), Quotatives: Cross-​linguistic and cross-​ disciplinary perspectives, 203–​228. Amsterdam: John Benjamins. Kegl, Judy. 1990. Predicate argument structure and verb-​class organization in the ASL lexicon. In Ceil Lucas (ed.), Sign language research: Theoretical issues, 149–​175. Washington, DC: Gallaudet University Press. Kimmelman, Vadim, Vanja de Lint, Connie de Vos, Marloes Oomen, Roland Pfau, Lianne Vink, & Enoch O. Aboh. 2019. Argument structure of classifier predicates: Canonical and non-​canonical mappings in four sign languages. Open Linguistics 5. 332–​353. Lau, Sin Yee Prudence. 2002. Causative alternation in Hong Kong Sign Language. Hong Kong: The Chinese University of Hong Kong MPhil thesis.


Gladys Tang, Jia Li, & Jia He Lau, Sin Yee Prudence. 2012. Serial verb constructions in Hong Kong Sign Language. Hong Kong: The Chinese University of Hong Kong PhD dissertation. Liddell, Scott K. 2003. Grammar, gesture, and meaning in American Sign Language. Cambridge: Cambridge University Press. Liddell, Scott K. & Robert Johnson. 1987. An analysis of spatial-​locative predicates in American Sign Language. Paper presented at the Fourth International Symposium on Sign Language Research. Lappeenranta, Finland. Lillo-​Martin, Diane. 1986. Two kinds of null arguments in American Sign Language. Natural Language & Linguistic Theory 4(4). 415–​444. Lillo-​Martin, Diane. 1991. Universal Grammar and American Sign Language: Setting the null argument parameter. Dordrecht: Kluwer. Meir, Irit. 2001. Verb classifiers as noun incorporation in Israeli Sign Language. In Geert Booij & Jaap van Marle (eds.), Yearbook of morphology 1999, 299–​319. Dordrecht: Kluwer. Pustejovsky, James. 1991. The syntax of event structure. Cognition 41. 47–​81. Pustejovsky, James. 1995. The generative lexicon. Cambridge, MA: MIT Press. Quer, Josep. 2005. Context shift and indexical variables in sign languages. In Efthymia Georgala & Jonathan Howell (eds.), Proceedings from Semantics and Linguistic Theory (SALT) 15, 152–​168. Ithaca, NY: Cornell University. Quer, Josep. 2018. On categorizing types of role shift in sign languages. Theoretical Linguistics 44(3–​4). 277–​282. Ramchand, Gillian C. 2008. Verb meaning and the lexicon: A first-​phase syntax. Cambridge: Cambridge University Press. Reinhart, Tanya. 2000. The theta system: Syntactic realization of verbal concepts. OTS Working Papers in Linguistics. Utrecht: Utrecht University, OTS. Reinhart, Tanya. 2002. The theta system –​an overview. Theoretical Linguistics 28(3). 229–​290. Rosen, Sara T. 1989. Two types of noun incorporation: A lexical analysis. Language 65(2). 294–​317. Sandler, Wendy & Diane Lillo-​ Martin. 2006. Sign language and linguistic universals. Cambridge: Cambridge University Press. Schembri, Adam. 2003. Rethinking ‘classifiers’ in signed languages. In Karen Emmorey (ed.), Perspectives on classifier constructions in signed languages, 3–​ 34. Mahwah, NJ:  Lawrence Erlbaum. Schembri, Adam, Caroline Jones, & Denis Burnham. 2005. Comparing action gestures and classifier verbs of motion: Evidence from Australian Sign Language, Taiwan Sign Language, and nonsigners’ gestures without speech. Journal of Deaf Studies and Deaf Education 10(3). 272–​290. Schick, Brenda. 1987. The acquisition of classifier predicates in American Sign Language. West Lafayette, IN: Purdue University PhD dissertation. Schick, Brenda. 1990. Classifier predicates in American Sign Language. International Journal of Sign Linguistics 1(1). 15–​40. Schlenker, Philippe. 2017. Super monsters I:  Attitude and action role shift in sign language. Semantics and Pragmatics 10(9). 1–​65. Shepard-​Kegl, Judy. 1985. Locative relations in American Sign Language: Word formation, syntax, and discourse. Cambridge, MA: Massachusetts Institute of Technology PhD dissertation. Slobin Dan I., Nini Hoiting, Marlon Kuntze, Reyna Lindert, Amy Weinberg, Jennie Pyers, Michelle Anthony, Yael Biederman, & Helen Thumann. 2003. A cognitive/​functional perspective on the acquisition of ‘classifiers’. In Karen Emmorey (ed.), Perspectives on classifier constructions in signed languages, 271–​298. Mahwah, NJ: Lawrence Erlbaum. Smith, Wayne. 1990. Evidence for auxiliaries in Taiwan Sign Language. 
In Susan Fischer & Patricia Siple (eds.), Theoretical issues in sign language research, vol. 1: Linguistics, 211–​228. Chicago: University of Chicago Press. Supalla, Ted. 1982. Structure and acquisition of verbs of motion and location in American Sign Language. San Diego, CA: University of California PhD dissertation. Supalla, Ted. 1986. The classifier system in American Sign Language. In Colette Craig (ed.), Noun classes and categorization, 181–​214. Amsterdam: John Benjamins. Tang, Gladys & Yang Gu. 2007. Events of motion and causation in Hong Kong Sign Language. Lingua 117(7). 1216–​1257. van der Kooij, Els. 2002. Phonological categories in Sign Language of the Netherlands: The role of phonetic implementation and iconicity. Utrecht: University of Utrecht PhD dissertation.


Classifiers: theoretical perspectives Wilbur, Ronnie B. 2003. Representations of telicity in ASL. Proceedings from the Annual Meeting of the Chicago Linguistics Society 39(1). 354–​368. Zucchi, Sandro. 2012. Formal semantics of sign languages. Language and Linguistics Compass 6(11). 719–​734. Zwitserlood, Inge. 1996. Who’ll handle the object? Utrecht: Utrecht University MA thesis. Zwitserlood, Inge. 2003. Classifying hand configurations in Nederlandse Gebarentaal (Sign Language of the Netherlands). Utrecht: Utrecht University PhD dissertation. Utrecht: LOT. Zwitserlood, Inge. 2008. Morphology below the level of the sign:  frozen forms and classifier predicates. In Josep Quer (ed.), Signs of the time:  Selected papers from TISLR 8, 251–​272. Seedorf: Signum. Zwitserlood, Inge. 2012. Classifiers. In Roland Pfau, Markus Steinbach, & Bencie Woll (eds.), Sign language: An international handbook, 158–​186. Berlin: De Gruyter Mouton. Zwitserlood, Inge & Ingeborg van Gijn. 2006. Agreement phenomena in Sign Language of the Netherlands. In Peter Ackema, Patrick Brandt, Maaike Schoorlemmer, & Fred Weerman (eds.), Arguments and agreement, 195–​229. Oxford: Oxford University Press.


8
CLASSIFIERS
Experimental perspectives

Inge Zwitserlood

8.1  Introduction

While classifier constructions are among the most thoroughly investigated sign language phenomena, they appear also to be among the most controversial ones. As was stated in Chapter 7, these constructions have been subject to extensive debates about terminology, sub-types, and linguistic status. This chapter presents an overview of the principal experimental investigations exploring the nature of classifier constructions. Most of these studies concentrate on the question to what extent elements in classifier constructions should be considered linguistic rather than relate to the theoretical analyses described in the previous chapter. Indeed, it is important that such information is available to confirm or challenge theories. For example, some analyses claim that the type of classifier is related to the argument structure of the predicate. Yet, what are the theoretical implications if we find that only the classifiers (i.e., handshapes) but not other elements (locations, movements) in these constructions are linguistic? This chapter addresses studies of the acquisition of classifier constructions by children and second language learners of a sign language (Section 8.2); a study comparing classifier constructions in sign languages with similar gestures of hearing non-signers (Section 8.3); psycholinguistic studies concerned with similarities and differences in categoricity when it comes to the interpretation of components of classifier constructions by signers and non-signers (Section 8.4); and neurolinguistic investigations looking at the processing of classifier constructions (in contrast to lexical signs) when produced and interpreted by healthy signers, signers who have experienced brain damage, and non-signers (Section 8.5). In Section 8.6, difficulties in interpreting and comparing the results are sketched, in particular with respect to the differences in the definitions of and the underlying assumptions about the object of study. Section 8.7 summarizes and concludes the chapter. For ease of reading, the terms 'entity', 'size and shape', and 'handling' classifier constructions will be used throughout this chapter rather than the variety of terms used in the original literature.


8.2  Acquisition of classifiers

8.2.1  Classifier constructions in L1 acquisition

Many investigations of child acquisition of classifier constructions have been undertaken, showing that while children already begin to use aspects of classifier constructions at a very young age, full mastery of these constructions is achieved late. Children's productions of incorrect forms have been taken as evidence for the morphological complexity of these forms, in particular when younger children's productions of parts of these constructions were systematically less often target-like than those produced by older children. More recently, it has been suggested that this is an indication that only the components that are acquired late are linguistic units. Owing to space limitations, we will restrict the discussion to three studies, in all of which elicited production tasks were used, the baseline for the target classifier constructions was the performance of adult native signers on the same tasks, and large groups of children participated.1 The studies are less homogeneous with respect to the definitions of the target structures, the focus of the studies, the circumstances in which the children acquired the sign language, and the dissimilarity of the stimuli (although in all studies these were visual, non-linguistic). We will first describe the results, then briefly compare them.

Bernardino (2006) investigated the acquisition of classifier constructions in Brazilian Sign Language (Libras) in a cross-sectional study with 61 deaf children between 4 and 12 years old (four with deaf parents). All children were requested to answer questions asked in Libras about 27 pictures featuring different events, designed to elicit classifier constructions of varying complexity and with a range of classifiers. The targets were determined on the basis of data elicited from five fluent adult Libras signers using the same task (Bernardino et al. 2004), and ratings of these data by four other native adult signers (Bernardino & Hoffmeister 2004). The mean percentage of classifier constructions was 80% in the responses of deaf children of deaf parents (56% target-like) and 52% in those of the deaf children of hearing parents (30% target-like). The increase in correctness and the order of appearance of particular classifiers was similar in both groups (i.e., unmarked handshapes being used first in all participating children). Still, even in the oldest native signer children, the best performance is more than 30% below target. Bernardino (2006) suggests that the deaf children of hearing parents are delayed in their acquisition of classifier constructions and concludes that maturity and length of exposure are crucial for the process of classifier acquisition, and that linguistic complexity of the target constructions also plays an important role.

An extensive investigation was conducted by De Beuzeville (2006), with 27 children aged between 4 and 10 who acquired Australian Sign Language (Auslan) as their native language. She used two tasks: the test battery for American Sign Language (ASL) ('VMP task'), developed by Supalla (1982), Supalla et al. (unpublished) to elicit dynamic spatial descriptions involving entity classifiers, and a description games task (Schick 1987, 1990a, 1990b), developed to elicit static and dynamic spatial descriptions. The child data from the VMP test were compared to native Auslan adult data elicited from 25 adults (Schembri 2001; Schembri et al.
2005) with respect to the handshape, and to targets set for ASL (Supalla et al. unpublished) for movement and location. The baseline for the Schick task was formed by data elicited from five native Auslan adult signers. The results reveal that the youngest children already used many classifier constructions in their descriptions. Both tasks show an age-related upward trend in (correct) use of classifier constructions. In general, the children scored higher on the Schick task than on
the VMP task, which is attributed to the fact that the former concerned more common events and fewer different objects than the latter. The children, particularly the youngest, often omitted expression of the Ground objects with a classifier. Appropriateness2 of classifiers for Figure objects was higher than that for Ground objects, although this difference was almost evened out by age 10 (around 85%). Entity classifier constructions are not fully mastered even by 10-​year old children (only 75% appropriate). The movement components (manner, path, and direction) scored high from the start and increased to almost ceiling in the oldest group. The youngest group still struggled with location in entity classifier constructions, while there is a steady correctness score for handling classifier constructions of around 100% in all age groups.3 De Beuzeville’s (2006) results generally reflect those of Supalla (1982) and Schick (1987, 1990a, 1990b): the choice of classifier (in particular in entity classifier constructions) is less accurate overall than expression of location or motion. While the developmental lines observed in these studies can be interpreted as showing increasing mastery of the morphological complexity of the constructions, De Beuzeville (2006) indicates that other explanations are equally feasible, such as semantic relevance, cognitive development, and acquisition of the conventions of systems of visual representation, and suggests that it is more plausible that the developmental lines follow those of acquisition of visual representation (e.g., Cogill-​Koez 2000) rather than those of linguistic representation. This is in particular the case for location and some movement components, which she considers non-​discrete and analogue elements. Tang et al. (2007) explored the acquisition of (simultaneous) classifier constructions in Hong Kong Sign Language (HKSL) in 14 deaf children between 6 and 13 years old, only one of whom had been exposed to HKSL from birth. They were divided in three groups according to their pre-​tested level of HKSL proficiency. In this study, narrations were elicited from all children by six four-​picture comic strips with a simple story line, and their productions of classifier constructions were compared to those elicited from two native HKSL signing adults with the same stimuli. The adults appeared to use more classifier constructions (118) in total than the children (99 (highest level), 72, and 63 (lowest level)). Each child group produced classifier constructions of all types, and relatively more handling constructions than the adults. The children used classifiers for Ground objects much less frequently than the adults did. Also, in each type, they sometimes chose inadequate classifiers, in particular for entity classifiers, and the youngest two groups somewhat more often for Ground objects than for Figure objects. In contrast, the locations and motions expressed by the children were almost always accurate:  less than 2% were erroneous, with no difference between the groups. Tang et al. (2007) hypothesize that the error pattern reflects a stage in which children reanalyze signs that they treated initially as containing merely phonological elements as morphologically complex constructions. The studies by De Beuzeville (2006) and Tang et al. (2007) reveal that L1 learners master production of the locations and movements in classifier constructions earlier than the classifiers. 
Furthermore, all three studies, like earlier ones, show that even the oldest child participants do not yet produce completely adult-​like classifiers in these constructions, in particular producing errors with entity classifiers. These results reflect findings with respect to L1 acquisition of (numeral) classifiers in spoken languages (Gandour et  al. 1984; Carpenter 1991, 1992; Senft 1996): classifiers (in Thai and Kilivila) appear not to be fully mastered until age 10 or even 15.

A point of concern is that, although the Auslan and Libras studies explicate the targets for and scoring of classifiers, none of the three studies does so for location and movement. Also, it can be inferred from (descriptions of) the stimuli that alternative descriptions without classifier constructions would also be appropriate, even if we allow for the fact that each study has their own definition of classifier constructions. This makes the presence of confounds quite possible. We will return to these issues in Section 8.6.

8.2.2  L2 acquisition of classifier constructions

To date, only a few studies have focused on L2 acquisition of sign language classifier constructions. Marshall & Morgan (2015) investigated the production and comprehension of classifier constructions by 11 hearing adults who had been learning British Sign Language (BSL) for two or three years. The production study featured picture pairs with the same objects. The first of each pair showed the object(s) in a particular spatial configuration, while the second picture (shown 3 seconds later) showed the same object(s) in a different configuration. Thirty pairs of stimuli concerned one, two, or three objects; ten pairs contained a multitude of objects (see Figure 8.1a,b for examples). Participants were asked to describe what had changed in the second picture. Targets were elicited from four native BSL signers, against which the parameters handshape, orientation, location, and movement, and the presence of an 'anchor' hand (for multiple entities) in the participants' productions were compared. Seventy-eight percent of participants' responses contained a classifier construction, but only 32% of the two-object constructions were correct, and only 25% of the multi-object constructions. Production of the correct classifiers was particularly difficult, while that of orientations and locations was much more accurate. Also, the choice of classifier in the participants' productions varied widely, in sharp contrast to that in the control group.

In the comprehension task, participants were offered randomized trials consisting of sets of four pictures, each followed by a video clip in which the spatial configuration of one of the four pictures was described in BSL. Again, some trials concerned two (or three) objects, and others several of the same objects, in varying configurations (an example is shown in Figure 8.1c). The participants were asked to select the picture that best matched the description. All descriptions scored 90% correct, with no differences between the parameters. The overall conclusion of this study is that location and orientation are easier to learn than classifiers, especially in production, because the acquisition of classifiers, as conventionalized forms, requires time and much exposure, while this is not necessary for location and orientation, as analogue elements.

Figure 8.1  Replications of stimuli in the production (a,b) and comprehension (c) studies (images after Marshall & Morgan 2015)

A recent study by Ferrara & Nilsson (2017) explores the production of classifier constructions by 12 second-year L2 students of Norwegian Sign Language (NSL), using two semi-spontaneous spatial description tasks: a route description between two buildings and a description of the spatial layout of a floor in a building. These were compared to similar descriptions by three NSL teachers. Results reveal that the proportion of classifier constructions in the responses of the controls (11%) was double that of the learners, and that the controls used more than twice the number of motion expressions. Like Marshall & Morgan (2015), Ferrara & Nilsson (2017) found that the learners' expression of location was quite accurate, in particular for the dominant hand (79% target-like), while target-like expression of movement and especially orientation scored somewhat lower (73% and 67%, respectively). However, in contrast to the BSL study, the NSL learners used a surprisingly high proportion of target-like classifiers (81%).

Boers-Visker & Van den Bogaerde (2019) report on a longitudinal study of the development of the use of space by two adults following a four-year NGT interpreter training program. Each year they were interviewed on everyday life topics, at intervals of ten weeks. To gain an impression of the kinds of spatial devices used by native signers, similar interviews were conducted with three Deaf L1 users. A code book was constructed for analysis of the data, and uncertainties that arose during the coding process were discussed among the annotators and with linguists and/or Deaf consultants. With regard to the use of classifier constructions, it was observed that the percentages were very low in both the L1 and L2 signers' productions (mean 4.5% and 4.4%, respectively). Over time, both L2 learners increasingly used classifier constructions, but at different paces. One learner tended to use more unsuitable spatial settings and inappropriate classifiers (handshape and/or orientation) than the other, even in the fourth year of study (error rates are not provided).
Summarizing, Marshall & Morgan (2015) and Ferrara & Nilsson (2017) show quite adequate expression of location and orientation in the learners’ productions. However, while Marshall & Morgan, as well as Boers-​Visker & Van den Bogaerde (2019) for one of their participants, report that production of target-​like classifiers by the adult learners can be quite problematic, this seems less so in Ferrara & Nilsson’s study. As was stated in the previous section, although the results show some similarities, a good comparison of the results of the studies is difficult due to the lack of a unified definition of a classifier construction and differences in assessment of handshape, location, orientation, and movement. Furthermore, it is apparent that the nature of the task –​in these studies: comprehension versus production, and descriptions of controlled stimuli versus (semi)-​ spontaneous utterances –​also has a large influence on the results.

8.3  Gesture and classifier constructions Classifier constructions in sign languages show striking similarities with iconic or representational gestures, as also described in Woll & Vinson, Chapter 25, for which reason they are sometimes argued to be non-​linguistic or partly linguistic and partly gestural (e.g., Cogill-​Koez 2000; Liddell 2003; Johnston & Ferrara 2014). To investigate this argument, descriptions of spatial situations in silent gestures produced by non-​signers have been investigated (Schembri 2001; Schembri et al. 2005 in Australia; Janke & Marshall 2017 in England). The underlying assumptions of the studies in Australia and England are that linguistic structure requires conventionalization, and conventionalization is apparent from language-​specificity. Thus, similarities in structures between sign language and silent gestures are an indication for non-​conventionality, and thus for non-​linguisticity. To test this, Schembri et al. (2005) elicited spatial descriptions with a subset of vignettes from the VMP (described in Section 8.2.1) from 25 deaf fluent Auslan signers and 25 hearing Australian students with no knowledge of any sign language.4 The signers were asked to describe in Auslan what the objects in the clips were doing, and the hearing participants were instructed to do so without speaking, using only their hands, imagining they were communicating with a profoundly deaf person. All descriptions in this study were coded for handshape, movement, and location. In the responses where participants represented objects located in and moving through space with their hands, there was a high correspondence between the locations and movements produced by the sign-​naïve participants and the Auslan signers (mean 75% and 77%, respectively). However, the handshapes of the sign-​naïve participants matched those of the signers much less frequently (44%). This is attributed to the larger degree of conventionalization of (classifier) handshapes within Auslan. The large degree of overlap in the expression of spatial locations and motions in both groups suggests that these are non-​conventionalized, and part of a general gesture system. Janke & Marshall (2017) used the design in Marshall & Morgan (2015) with 30 British non-​signers, who were asked to describe the pictures in a silent condition, as in Schembri et  al. (2005). They focused on the hands, comparing the produced handshapes to the BSL target classifier handshapes. The participants responded with at least one ‘hand-​ as-​object’ in 94% of the trials. While the control group employed 16 different classifiers, the number of ‘hands-​as-​object’ of the sign-​naïve participants in this study was 53, of which nine are not part of the BSL handshape inventory. The participants sometimes 179

molded their hands to represent more than one object, as in Figure 8.2a (right hand portraying a paper and a pencil) and Figure 8.2b (hand depicting glass and toothbrush), or to represent an object in a somewhat different way from the usual in BSL (Figure 8.2c). Like Schembri et al. (2005), Janke & Marshall conclude that the non-signers were able to tap into a repertoire of gestural resources, in this case for object representation.

Figure 8.2  Replications of stimuli and some of the produced non-BSL handshapes (images after Janke & Marshall 2017)

The suggestion in both studies that the non-​ signers have taken recourse to the affordances that the human body and the space around it offer is perfectly plausible. The issue to what extent overlap in forms produced by signing and sign-​naïve participants is, indeed, indicative of their linguistic status, remains open. This will be further discussed in Section 8.6.

8.4  Psycholinguistic studies

We now turn to psycholinguistic investigations of classifier constructions. An often-cited report concerns the gradient-categorical properties of the components classifier and location in simple static spatial expressions in ASL (Emmorey & Herzig 2003). This study tested Supalla's (1982) claim that all units in classifier constructions are discrete and meaningful, as well as Liddell's (2003) counterclaim that some of these units, in particular the spatial locations, are analogue, non-linguistic representations. To this end, the authors investigated (i) the interpretation and production of different classifier handshapes depicting different sizes of a flat round object, and (ii) the interpretation and
production of varying locations in classifier constructions. Participants were ASL signers (all having a good command of ASL, but not all native signers) and hearing non-​signers. In study (i), three experiments were conducted to investigate comprehension of object size expressions with classifier constructions. Participants in the first two of these experiments were ten ASL signers and ten non-​signers; the third experiment was carried out by ten (production) and six (interpretation) ASL signers. In the first experiment, all participants were shown video clips (in randomized order) of a native ASL signer producing handshapes representing differently-​sized flat round objects (medallions), in which differences in thumb and index finger configuration represented the different object sizes (see Figure 8.3a). For each clip, the participants were requested to select the most appropriate one from ten differently-​sized round stickers, to match the description in the video. The second experiment contained 100 trials in which each of the medallion sizes was shown together with one of the ten clips. The participants indicated how well handshape and medallion matched on a seven-​point scale. The third experiment was a repetition of the first experiment, with the difference that the handshapes were collected from ten ASL users who were naïve to the study. Each produced a non-​contrastive description of one medallion. Results of the first experiment reveal that the performance of the non-​signers did not show a correlation between sticker size and hand aperture, while the ASL signers were sensitive to the size differences in the handshape, apparently interpreting them as analogue and gradient. However, the signers’ and non-​signers’ acceptability judgments of hand aperture matching medallions size in the second experiment were categorical. ASL signers and non-​signers differed in that the signers’ acceptability ratings were significantly sharper. From the third experiment, it appears that the naïve signers used only three different handshapes to describe the ten medallions, i.e., a -​handshape (F), a -​handshape (‘baby C’), and a bimanual /​ -​handshape (C). The deaf ASL signers who interpreted these productions were not able to match medallion size with the produced classifiers. The conclusion, based mainly on the second and third experiment, is that the classifier is treated as categorical by signers (although they are able to recognize handshape manipulations expressing gradient size) but not by non-​signers, and can thus be considered a discrete, linguistic element. Study (ii) reported by Emmorey & Herzig tested interpretation of spatial loci in classifier constructions. First, ten deaf ASL signing participants (nine native signers) and ten non-​signing hearing participants were shown 30 ASL constructions in which a classifier representing a small round object was localized at varying locations with respect to a classifier for a horizontal long entity (see Figure 8.3b for a schematic illustration of the construction and the grid of all 30 locations), in random order. For each clip, they were requested to match the spatial description by putting a round sticker in relation to a horizontal bar on a response sheet. Second, deaf ASL signers and hearing non-​signers, ten in each group, were offered 100 spatial ASL descriptions (a subset of the clips from the previous experiment) paired with sheets with a dot in a spatial relation to a horizontal bar. 
Participants were asked to indicate how well-​matched each pair was on a seven-​point scale. Third, 30 deaf ASL signers were each requested to describe one out of the set of 30 pictures with varying spatial configurations of dot and horizontal bar (non-​contrastive). Their (randomized) descriptions were subsequently shown to six other deaf ASL signers, who performed the same sticker placement task as in the first experiment.
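The question of whether sticker placements of this kind track the stimulus locations continuously or collapse into a handful of areas (as in the hierarchical cluster analysis mentioned in note 5) can be made concrete with a small analysis sketch. The following is illustrative only and is not the analysis reported by Emmorey & Herzig (2003); the grid coordinates, the noise level, the four-cluster cut, and the use of SciPy's clustering routines are assumptions chosen for the example.

```python
# Illustrative sketch: two ways to probe whether responses to spatial stimuli
# are gradient (placements track stimulus locations continuously) or
# categorical (placements collapse into a few clusters).
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Hypothetical stimulus locations: a 5 x 6 grid of (x, y) positions.
stim = np.array([(x, y) for y in range(5) for x in range(6)], dtype=float)

# Hypothetical responses: stimulus locations plus placement noise.
resp = stim + rng.normal(scale=0.3, size=stim.shape)

# Gradience check: do response coordinates track stimulus coordinates?
r_x, _ = pearsonr(stim[:, 0], resp[:, 0])
r_y, _ = pearsonr(stim[:, 1], resp[:, 1])
print(f"stimulus-response correlation: x = {r_x:.2f}, y = {r_y:.2f}")

# Clustering check: how many areas do the responses collapse into?
tree = linkage(resp, method="ward")
labels = fcluster(tree, t=4, criterion="maxclust")  # cut the tree at 4 clusters
print(f"responses assigned to {len(set(labels))} areas")
```

On data like the hypothetical responses above, a high stimulus-response correlation points to gradient use of space, whereas a small number of well-separated response clusters would point to a more categorical use.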

The results of the first spatial loci experiment indicate that signers and non-signers interpreted the locations as analogue to those in physical space, although the performance in both groups showed a bias towards placing the stickers away from the central vertical axis. In the second experiment, neither the ASL signers nor the non-signers clustered near-target locations with target locations. Finally, interpretation of the loci produced by the naïve signers in the third experiment appeared to be also analogue and gradient, although there was an even stronger category bias than in the first experiment. This was argued to be a repetition effect caused by two successive interpretations of visual-spatial stimuli (first in the production task, then in the interpretation task). The fact that different locations were used or accepted for each stimulus item (all experiments) and that there was no difference in performance between signers and non-signers (experiments 2 and 3) was taken as evidence for a gradient, non-linguistic nature of spatial loci.

Figure 8.3  Illustrations for the comprehension experiments (images after Emmorey & Herzig 2003): (a) handshape continuum describing round objects used in the stimuli of the size interpretation task; (b) schematic view of the hand configuration (left) and location grid (right) used in the first experiment testing interpretation of spatial locations

The general conclusion of this study is that the handshapes in classifier constructions are linguistic elements, but that the locations in sign space are gradient and analogue, non-​linguistic representations. This conclusion, however, is not entirely justified, for several reasons. First, participants were quite free in their responses, e.g., in the sticker

placement test, or were to use a gradient matching scale, while categorical perception studies typically use forced-​choice tasks with a very limited set of possibilities (see the study by Sevcikova (2013) and Sevcikova Sehyr & Cormier (2015) below). Second, the third test in both studies shows that signers do not produce the range of gradually differing handshapes and locations that were used as stimuli in the first and second tests. Thus, it is not clear what these tests actually measure. Finally, although it is stated that locations are interpreted in an analogue, gradient way, all location tests show a tendency to cluster the locations, in particular the third test.5 Sevcikova (2013) and Sehyr & Cormier (2015) report on a categorical perception experiment of classifiers in handling constructions in BSL. Fourteen deaf BSL signers and 14 hearing non-​signers carried out forced-​choice identification tasks and ABX discrimination tasks on two handshape continua (flat-​ to flat-​ , and closed-​ to open-​ ), each containing 11 handshapes with apertures equally incremented from closed to fully opened. The handshapes were presented in short animations, and reaction times were measured with each task. In order to prime for handling (rather than mere hand aperture), prior to the two tasks, participants saw images of persons moving flattish rectangular or cylindrical objects from a shelf, in which the hands were not clearly visible. In the identification tasks, the participants were first shown the handshapes of the continuum ends, followed by 22 clips of the full range of 11 handshapes in randomized order. They were requested to indicate which of the endpoints the handshape matched best for each clip. The discrimination task, with the same stimuli in 3×12 trials for each continuum, involved viewing clips with handshapes of each continuum in triads, the first two always with a distance of two steps on the continuum, the third one always the same as either the first or the second handshape in the triad. The participants were asked to indicate whether the third handshape matched the first or the second handshape they had just seen. Both groups performed similarly for both continua. Deaf signers and hearing non-​ signers made the same binary categorizations of the handshapes in each continuum in the identification task. The reaction times did not differ overall, and for both groups, handshape categorization on the boundary was significantly slower than categorization within category or on the endpoints. The results of the discrimination task show that both groups placed similar perceptual boundaries and most accurately matched pairs that straddle the boundaries. Reaction times of the signers were fastest at the closed ends of the continua, and their performance slowed down progressively towards the fully open ends. There were no such differences in the reaction times of the non-​signers. The overall conclusion is that deaf BSL signers and sign-​naïve participants perceive the handshapes on the continua categorically. Unfortunately, this study, too, has the drawback that the handshape continua do not reflect the way in which BSL signers would normally represent the objects (Sevcikova 2013).
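For readers less familiar with this paradigm, the logic by which categorical perception is inferred from identification and discrimination data can be sketched schematically as follows. The snippet is not taken from Sevcikova (2013) or Sevcikova Sehyr & Cormier (2015); the eleven-step continuum, the response proportions, and the helper functions are hypothetical and serve only to illustrate the two diagnostic criteria (a sharp identification boundary and a discrimination peak at that boundary).

```python
# Schematic illustration (not from the studies discussed above): categorical
# perception is inferred when (1) identification responses switch abruptly at
# a boundary on the continuum and (2) discrimination accuracy peaks for pairs
# that straddle that boundary.

# Hypothetical identification data: proportion of "open endpoint" responses
# for each of 11 handshape steps (closed -> open), pooled over participants.
ident = [0.02, 0.03, 0.05, 0.08, 0.15, 0.48, 0.85, 0.93, 0.96, 0.98, 0.99]

# Hypothetical ABX discrimination accuracy for each two-step pair (steps i, i+2).
discrim = [0.55, 0.58, 0.56, 0.62, 0.81, 0.84, 0.63, 0.57, 0.56]

def boundary_step(identification):
    """Return the first continuum step at which identification crosses 50%."""
    for step, p in enumerate(identification):
        if p >= 0.5:
            return step
    return None

def peak_pair(discrimination):
    """Return the index of the pair with the highest discrimination accuracy."""
    return max(range(len(discrimination)), key=lambda i: discrimination[i])

b = boundary_step(ident)
peak = peak_pair(discrim)
# A pair (i, i+2) straddles the boundary b if i < b <= i + 2.
straddles = peak < b <= peak + 2

print(f"identification boundary at step {b}")
print(f"best-discriminated pair: steps {peak}-{peak + 2} (straddles boundary: {straddles})")
```

Reaction-time differences, such as the slowdown at the category boundary reported above, provide a further diagnostic but are not modeled in this sketch.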

8.5  Neurolinguistic studies

While the processing of human language mainly takes place in the left hemisphere, some functions of language use are located in the right hemisphere, such as discourse abilities and spatial processing. Poizner et al. (1987) were the first to notice that signers with right
hemisphere damage did not have problems with linguistic tests, except for comprehension of spatial syntax. To further investigate this, the comprehension and production of classifier constructions and lexical signs have been compared between healthy signers and signers with brain damage, and brain activity in processing these elements has been measured. Relevant studies are summarized in Sections 8.5.1 and 8.5.2.

8.5.1  Studies with brain-damaged participants

Atkinson et al. (2005) carried out two experiments on the comprehension of static spatial expressions in BSL with 15 fluent, brain-damaged signers (six right-lesioned and nine left-lesioned) and 15 (experiment 1) and 18 (experiment 2) fluent healthy signers. In experiment 1, participants watched two sets of 30 randomized video clips of spatial expressions in BSL. One set contained sentences with a preposition, the other sentences with a classifier construction. For each sentence, participants were asked to identify the matching picture out of a set of four. Examples of the sentence types and picture set are provided in Figure 8.4.

Figure 8.4  Replications of stimuli used in the spatial comprehension task (images after Atkinson et al. 2005): a sentence with a BSL preposition (a), a sentence with a classifier construction (b), and a picture set from which participants were to select the matching spatial setting (c)

Results reveal a much poorer performance by the lesioned signers, with no difference in overall score between left and right lesion. Participants with left hemisphere damage had more trouble comprehending sentences with prepositions than sentences with

classifier constructions. This was expected since prepositions are lexical signs and are thus processed in the left hemisphere, and it is reported that some hearing aphasic participants also have difficulty comprehending prepositions. The right-​lesioned participants had more difficulty interpreting sentences describing reversible configurations than non-​ reversible configurations, in both sentence types. Although this difficulty is attributed to reduced capability in processing topographic space, which is needed to understand the arrangement of the objects, the researchers note that this does not explain why this group performed similarly in the comprehension of sentences with prepositions and classifier constructions, the latter typically using topographic space. Experiment 2 investigated the interpretation of the handshape, orientation, and location of classifiers. All participants were offered 34 classifier constructions in isolation, and for each construction, they were to select the matching picture out of four. Based on the selected pictures, the interpretation of the handshape, the orientation, and the spatial location were analyzed. Again, the controls outperformed both lesioned groups, and the performance of the left-​and right-​lesioned participants was similar. This is considered further evidence that right hemisphere damaged signers are impeded in interpreting classifier constructions expressing spatial information and positions of objects, but, again, there is no explanation for the poorer performance of the left-​ lesioned participants unless we assume that processing spatial information also engages the left hemisphere. Hickok et  al. (2009) carried out a production study, hypothesizing that production of classifier constructions involves activity in the right hemisphere, in contrast to lexical signs, which show a strong dominance of the left hemisphere. This was tested by elicitation of a narrative from 21 fluent deaf ASL signers with unilateral left-​sided (13) or right-​sided (8)  brain damage, and comparing the produced lexical signs and classifier constructions. Overall, the results show that handshape errors were proportionally more frequent in the classifier constructions produced by the lesioned signers than in their lexical signs. While there was no significant error difference in the classifier constructions and lexical signs produced by the left-​lesioned signers, the right-​lesioned signers’ error percentages for classifier constructions far exceeded those of the left-​lesioned signers and in particular their own lexical errors. This suggests that, contra expectation, production of classifier constructions engages both hemispheres. Still, a dominance is found in the right hemisphere.

8.5.2  Brain imaging studies

Recently developed technology, such as fMRI and PET scans, makes it possible to directly measure brain activity rather than inferring which brain regions are engaged in particular tasks from production and comprehension errors. MacSweeney et al. (2002) performed a comprehension study using fMRI technology with 17 right-handed native BSL signers (deaf and hearing). The participants watched five blocks of five BSL sentences expressing topographic spatial relations and five blocks of five BSL sentences without spatial expressions. They were asked to push a button when they identified a sentence that described an anomalous situation (each block contained one). Furthermore, eight hearing non-signers performed a similar task, with comparable English sentences presented visually and auditorily. The deaf signers performed more accurately (91% correct) than the hearing signers (75% correct on topographic
and 85% correct on non-​topographic sentences). Processing of the topographic BSL sentences appeared to result in more activation bilaterally in the occipito-​temporal regions and the left inferior and superior parietal regions than processing of non-​ topographic sentences. Such a contrast was not observed for the processing of the English sentences. Again, we find unexpected left hemisphere engagement in classifier constructions. The authors suggest that the comprehension of direct mapping of spatial information in simple classifier constructions particularly involves left hemisphere regions, while the production of classifier constructions engages the right hemisphere to a greater extent. Emmorey et  al. (2002) also investigated which brain regions are involved in the production of spatial expressions in ASL, using PET and MR scans. Ten deaf native ASL signers were asked (i)  to produce bimanual classifier constructions on the basis of line drawings of two objects in a spatial relation; (ii) to produce ASL prepositions on the basis of similar line drawings; and (iii) to label objects figuring in the same line drawings with ASL signs. While performing these tasks, the participants’ brain activity in tasks (i)  and (ii) were measured and compared. Furthermore, it was compared to the brain activity of native English speakers expressing the same spatial relations in English elicited in a previous study by Damasio et al. (2001). The results show that, first, spatial expressions with a classifier construction activated areas in the left hemisphere, similar to object labeling and to spatial expression with an English preposition by hearing people, but they also showed right hemisphere engagement. The latter, expected result is explained as being due to the expression of topographical information in classifier predicates. Second, expression of spatial relations by means of ASL prepositions exhibited unexpected activation in the right hemisphere. The authors do not have a clear explanation for this observation and suggest that it may be related to the marginal use of spatial prepositions in ASL. Taken together, the neurolinguistic studies all meet with activation of unexpected brain areas in the left hemisphere in the processing of classifier constructions. This may be due to the fact that in these constructions, spatial and object information is simultaneously present. In order to tease these types of information apart, another PET study was performed by Emmorey et al. (2013). Eleven Deaf native ASL signers produced: (i) static spatial expressions based on 2 × 25 line drawings of a Figure and a Ground object that only differed in the location of the Figure relative to the Ground (see Figure 8.5a); (ii) dynamic spatial expressions for 2 × 25 line drawings of one and the same Figure object moving along different trajectories with respect to a Ground object (Figure 8.5b); (iii) static spatial expressions elicited by 50 line drawings with a Figure and a Ground in which the Figure object differed (see Figure 8.5c); (iv) lexical signs for the 50 objects in the previous task; and (v) dynamic spatial expressions for 2 × 25 line drawings in which different Figure objects moved away from a Ground object (see Figure  8.5d). The participants were instructed to use bimanual classifier constructions in all tasks except (iv). 
It was expected that manipulation of the classifier handshape would show more activation in the left hemisphere, and manipulation of locations and motions would result in more activation in the right hemisphere.

Figure 8.5  Replications of examples of the stimuli (images after Emmorey et al. 2013) eliciting static (a,c) and dynamic (b,d) spatial relations between a Ground object (table) and a variety of Figure objects (c,d), as well as labels for the various Figure objects

Comparison of classifier constructions that only varied in the spatial locations with constructions that only varied in the Figure object classifier showed a greater bilateral activation in the superior parietal and occipital cortices for the former, and an increased activation in the left inferior frontal cortex and the left posterior IT cortex for the latter,
as expected. Static locative and dynamic movement classifier constructions did not differ significantly, although locative constructions showed a slightly larger activation in the left hemisphere. Production of lexical signs activated the left fronto-​temporal cortices more than classifier constructions expressing static and dynamic spatial relations, also as expected. Moreover, they resulted in activation in the anterior temporal lobes and cerebellum. The classifier constructions, on the other hand, caused more activation in the parietal cortices. Classifier constructions with different classifiers and lexical signs activated the left inferior frontal gyrus equally, and more engagement of the right anterior temporal cortex was found for lexical signs. Static classifier constructions for different objects, furthermore, showed more activation in the left hemisphere (precentral sulcus) and bilaterally in the parietal cortices than lexical signs. Finally, production of classifier constructions for dynamic spatial expressions with the same or different Figure objects did not evoke significantly diverging activations, although expressions with different Figure objects showed more engagement in several regions in the left frontal cortex, left fusiform gyrus, and left para hippocampal gyrus. When brain activity in the production of all classifier constructions was compared with that of lexical signs, the former showed more activation bilaterally in the superior parietal and occipital cortices, while the latter exhibited more bilateral activity in the anterior temporal lobes. Thus, the production of lexical signs and classifiers appears to engage the left inferior frontal gyrus more than the production of spatial information, which is an indication that production of classifiers, too, involves lexical retrieval. In contrast, spatial locations and motions in these constructions activate the right superior parietal lobules more than lexical signs and classifiers. This explains the unexpected result in the previously mentioned studies that classifier constructions activate the left hemisphere as well as the right one. The question why both hemispheres are involved in processing prepositions remains to be investigated. The studies discussed here meticulously investigate the brain regions active in processing of different aspects of spatial constructions, thus giving more insight into what had already been covered by neurological investigations of the processing of spatial elements in spoken languages. Nonetheless, caution is needed in generalizing these results. As in all studies of this kind, since non-​controlled elements are carefully excluded as much as possible, we do not know to what extent the signed stimuli or constructions produced out of context reflect what happens in normal language use.

8.6  Discussion

The different viewpoints from which classifier constructions have been studied and the various methods involved unambiguously show a dichotomy between classifiers (handshapes) and spatial information in classifier constructions. Results indicate: (i) that classifiers are more difficult to acquire than spatial loci, movements, and orientations, (ii) that the handshapes in the productions of signers and gesturers are less similar than the spatial information, (iii) that classifiers are interpreted categorically while spatial loci are produced and interpreted as analogue, gradient forms, and (iv) that the processing of classifiers involves activation in brain areas different from those involved in the processing of spatial loci. On the basis of the results, these studies strongly suggest or even claim that classifiers are linguistic elements. Spatial loci and movements, on the other hand, as non-discrete and non-listable, analogue elements, seem to be part of the general human spatial cognition rather than of a linguistic system. As such, these
findings contradict previous analyses of location and movement as morphemic in classifier constructions. Although these results seem straightforward, there are concerns connected with the studies that have already been touched upon briefly in the previous sections, and the results should, thus, be carefully interpreted. First, not all studies clearly define what they consider a classifier or a classifier construction, in particular in comparison to other (lexical) signs expressing motion and object shape, or signs in which the handshape represents an entity. Even where a definition is provided, examples, stimuli, or stimulus descriptions in some reports suggest that unintentionally lexical signs have been included among the classifier constructions, which may have influenced the results (e.g., MacSweeney et al. 2002; Atkinson et al. 2005; Bernardino 2006). Second and related to this, it is very difficult to define errors (and categories) in spatial locations and movements, as was also indicated by De Beuzeville (2006: 206). Since information about the way in which these are scored is not always provided, it is unclear what constitutes an error, deviation, or similarity in spatial information in produced (classifier) signs and in gestures (e.g., Schembri et al. 2005; Hickok et al. 2009; Marshall & Morgan 2015). What is more, while the high scores of sign language learners and non-​signers on comprehension and production of spatial elements are explained as a result of analogy, if this is indeed the case, the question arises why they make errors at all. Additionally, the outcome of the various experiments investigating spatial locations is not unambiguously that they are analogue and gradient. In fact, even in the tests in which participants have much freedom in selecting spatial locations (Emmorey & Herzig 2003), these tend to be clustered. These are issues that need to be addressed in future studies. A third issue is that, as a consequence of the necessity to eradicate undesired variability effects, the sentences and classifier constructions in the stimuli and the participants’ productions are often produced and interpreted out of context. Participants are sometimes even instructed to use particular constructions (e.g., Emmorey et al. 2002, 2013). It is therefore undecided whether the results are representative of natural language use, or whether they reflect the possibility for signers and non-​signers to interpret and perform manipulations of otherwise linguistic elements, just like it is possible for hearing people to use length of a (speech) sound to convey variation in action or object size. Without evidence for gradience in spatial expressions in everyday language use, these studies are merely indicative of the possibilities that the medium offers and of the extent to which humans are able to play with these possibilities. Fourth, the assumption that discreteness is a necessary or a sufficient condition for linguistic status has long proven to be false. As was also stated by Sevcikova Sehyr & Cormier (2015), non-​speech sounds and other phenomena have been shown to be perceived categorically, color perception being the best-​known example (Berlin & Kay 1969). Moreover, some speech sounds (in particular vowel properties) are not perceived categorically, while others are perceived categorically by users as well as non-​users of a language, the latter even including non-​humans (see Hary & Massaro (1982); Schouten et al. (2003); Lotto et al. (2009), for discussion). 
Therefore, any conclusion that locations or movements in classifier constructions are not linguistic because they are not discrete is problematic. Finally, the observed high incidence of similarities in spatial constructions produced by signers and sign-naïve persons, which would be unexpected in those of users and non-users of a spoken language, is suggested to be related to the general human substrate of visuo-spatial knowledge and gestural skills (e.g., Schembri et al. 2005; Marshall &
Morgan 2015; Janke & Marshall 2017). Spatial expression in spoken languages, in contrast, is conventionalized to a high degree and can only be accessed with linguistic knowledge. This implies that conventionalization inevitably involves loss of transparency, to the degree of arbitrariness, and that we can only speak of conventional linguistic items when they are not transparent. However, this seems to be taking the point too far, and we may want to restate some of our notions, such as conventionalization. After all, it is generally acknowledged that many signs in sign language lexicons, although iconic, are conventionalized. It is important that complex issues like classifier constructions are investigated by means of a variety of differently focused studies, as each investigation has its drawbacks and biases. The experimental studies summarized above already provide interesting information about the acquisition of spatial constructs and signers’ and non-​signers’ perception and production of manual constructions in which various meaningful elements are simultaneously present. Still, we have to conclude that inferences about the linguistic status of the investigated elements are premature. This all leads us to the basic question of what makes a communication system a language. The study of sign language opens new windows to our understanding of the nature of language. Indeed, previously proposed design features of natural language that were based entirely on spoken language (e.g., Hockett 1960) turned out to be too narrow. For example, the principles of linearity and arbitrariness appear, for a large part, to be due to the one-​dimensional nature of spoken language on the one hand (e.g., Ladd 2014), which disfavors simultaneous expression of linguistic elements, and to their limited propensity to map real world experiences in an analogue way onto linguistic form on the other hand (e.g., Taub 2001). Moreover, linguistics has, for a long time, focused on written text, excluding non-​categorical vocal and non-​vocal parts of the speech signal (e.g., prosody, intonation, facial expression, and gesture) from the field of linguistics. Recent studies of these phenomena, as well as observations that iconicity is more prevalent in spoken languages than had been previously assumed (see, e.g., Perniss et al. (2010)), however, indicate that the field of linguistics needs to be extended to include such aspects. Therefore, it is necessary to reconsider whether analogue forms and gradience in sign languages, in the case of use of spatial motion, location, and orientation, should be inside our definition of language.

8.7  Summary and conclusion

This chapter provides an overview of a range of experimental studies addressing the nature of classifier constructions in sign languages. First and second language acquisition studies show that the classifiers are the most difficult elements to acquire, in contrast to the spatial information; in the production of sign language learners, classifiers are much more prone to error than lexical signs, and their mastery takes longer. Comparison of classifier constructions produced by signers with representational gestures elicited from non-signers shows that the expression of spatial information is quite similar, but that there are considerable differences between the two groups in the selected handshapes. Perception of handshape and location in classifier constructions appears to be similar in signers and non-signers: the classifiers studied tended to be perceived as categorical, but this was less so for spatial locations. Finally, neurolinguistic studies reveal that lexical signs are processed in the left hemisphere while classifier constructions engage both hemispheres. In particular, spatial information is responsible for right hemisphere
activity. All these results have been interpreted as indications that classifiers are linguistic, while spatial information is non-​linguistic. Despite the uniformity of the results, I have argued that such a conclusion is precipitous, even if inaccuracies in the research designs are accounted for. The case rests on statements about the nature of human language that are, or should be, under debate, since they have been based on a substratum of spoken language containing discrete, categorical elements. Some of the assumptions about language, in particular the one concerning the arbitrariness of the sign, have been refuted because sign language research has contributed to the current knowledge that languages allow for iconicity at various levels. The study of sign languages may similarly broaden our understanding about the previously assumed substantial role of discreteness and categoricity in the linguistic system.

Notes

1  For other studies, see Kantor (1980), Supalla (1982), Schick (1990a, b), Slobin et al. (2003), Fish et al. (2003), Martin & Sera (2006), Morgan et al. (2008), and Brentari et al. (2013).
2  De Beuzeville (2006) defines appropriate handshapes as handshapes that (i) are produced by the controls for the same items, (ii) are used by the controls for the same object in other test items, (iii) are also acceptable by adult native signers, (iv) occur in the lexical sign for the object, or (v) belong to a different classifier type (e.g., an accepted handling classifier for a particular object instead of the target entity classifier).
3  This is related to the facts that non-expression of a Ground classifier is considered as location omission, and that some of the handling constructions involved person agreement rather than spatial expression.
4  Excluded here are the comparisons in this study with productions of adult deaf fluent signers of Taiwan Sign Language (TSL), (assumed) target structures in ASL, and results of similarly designed studies with native and late learners of ASL, adult ASL signers, American sign-naïve adults, and an American deaf homesigner studied by Singleton & Morford (1993) and Singleton & Newport (2004).
5  A hierarchical cluster analysis performed (with thanks to Toni Rietveld) on the mean results of all three tests shows that the signers cluster the locations in four areas, and the non-signers in five areas.

References Atkinson, Jo, Jane Marshall, Bencie Woll, & Alice Thacker. 2005. Testing comprehension abilities in users of British Sign Language following CVA. Brain and Language 94. 233–​248. Berlin, Brent & Paul Kay. 1969. Basic color terms:  Their universality and evolution. Berkeley, CA: University of California Press. Bernardino, Elidea L.A. 2006. What do deaf children do when classifiers are not available? The acquisition of classifiers in verbs of motion and verbs of location in Brazilian Sign Language (LSB). Boston, MA: Boston University PhD dissertation. Bernardino, Elidea L.A. & Robert Hoffmeister. 2004. The use of classifiers in verbs of motion and verbs of location in Brazilian Sign Language –​production and evaluation. Boston, MA: Boston University Manuscript. Bernardino, Elidea L.A., Robert Hoffmeister, & Shanley Allen. 2004. The use of classifiers in verbs of motion and verbs of location in Brazilian Sign Language. Boston, MA:  Boston University Manuscript. Boers-​Visker, Eveline & Beppie van den Bogaerde. 2019. Learning to use space in the L2 acquisition of a signed language. Two case studies. Sign Language Studies 19(3). 410–​452. Brentari, Diane, Marie Coppola, Ashley Jung, & Susan Goldin-​Meadow. 2013. Acquiring word class distinctions in American Sign Language: Evidence from handshape. Language Learning and Development 9. 130–​150.

Inge Zwitserlood Carpenter, Katie. 1991. Later rather than sooner: Extralinguistic categories in the acquisition of Thai classifiers. Journal of Child Language 18. 93–​113. Carpenter, Katie. 1992. Two dynamic views of classifier systems: Diachronic change and individual development. Cognitive Linguistics 3(2). 129–​150. Cogill-​Koez, Dorothea. 2000. Signed language classifier predicates: Linguistic structures or schematic visual representation? Sign Language & Linguistics 3(2). 209–​236. Damasio, Hanna, Thomas J. Grabowski, Daniel Tranel, Laura L.B. Ponto, Richard D. Hichwa, & Antonio R. Damasio. 2001. Neural correlates of naming actions and of naming spatial relations. NeuroImage 13. 1053–​1064. De Beuzeville, Louise. 2006. Visual and linguistic representation in the acquisition of depicting verbs: A study of native signing deaf children of Auslan (Australian Sign Language). Newcastle, Australia: University of Newcastle PhD dissertation. Emmorey, Karen, Hanna Damasio, Stephen McCullough, Thomas J. Grabowski, Laura L.B. Ponto, Richard D. Hichwa, & Ursula Bellugi. 2002. Neural systems underlying spatial language in American Sign Language. NeuroImage 17. 812–​824. Emmorey, Karen & Melissa Herzig. 2003. Categorical versus gradient properties of classifier constructions in ASL. In Karen Emmorey (ed.), Perspectives on classifier constructions in sign languages, 221–​246. Mahwah, NJ: Lawrence Erlbaum. Emmorey, Karen, Stephen McCullough, Sonya Mehta, Laura L.B. Ponto, & Thomas J. Grabowski. 2013. The biology of linguistic expression impacts neural correlates for spatial language. Journal of Cognitive Neuroscience 25(4). 517–​533. Ferrara, Linday & Anna-​Lena Nilsson. 2017. Describing spatial layouts as an L2M2 signed language learner. Sign Language & Linguistics 20(1). 1–​26. Fish, Sarah, Bruce Moren, Robert Hoffmeister, & Brenda Schick. 2003. The acquisition of classifier phonology in ASL by deaf children: Evidence from descriptions of objects in specific spatial arrangements. In Barbara Beachley et  al. (eds.), Proceedings of the Annual Boston University Conference on Language Development 27(1). 252–​263. Gandour, Jack, Soranee Holasuit Petty, Rochana Dardarananda, Sumalee Dechongkit, & Sunee Mukngoen. 1984. The acquisition of numeral classifiers in Thai. Linguistics 22. 455–​479. Hary, Joseph M. & Dominic W. Massaro. 1982. Categorical results do not imply categorical perception. Perception & Psychophysics 32(5). 409–​418. Hickok, Gregory, Herbert Pickell, Edward S. Klima, & Ursula Bellugi. 2008. Neural dissociation in the production of lexical versus classifier signs in ASL: Distinct patterns of hemispheric asymmetry. Neuropsychologia 47. 382–​387. Hockett, Charles F. 1960. The origin of speech. Scientific American 203. 89–​97. Janke, Vikki & Chloë Marshall. 2017. Using the hands to represent objects in space: Gesture as a substrate for signed language acquisition. Frontiers in Psychology 8. 1–​13. Kantor, Rebecca. 1980. The acquisition of classifiers in American Sign Language. Sign Language Studies 28. 193–​208. Johnston, Trevor & Lindsay Ferrara. 2014. Lexicalization in signed languages: When is an idiom not an idiom? Selected Papers from UK-​CLA Meetings 1. 229–​248. Ladd, Robert. 2014. Simultaneous structure in phonology. Oxford: Oxford University Press. Liddell, Scott K. 2003. Sources of meaning in ASL classifier predicates. In Karen Emmorey (ed.), Perspectives on classifier constructions in sign languages, 199–​220. Mahwah, NJ:  Lawrence Erlbaum. Lotto, Andrew J., Gregory S. 
Hickok, & Lori L. Holt. 2009. Reflections on mirror neurons and speech perception. Trends in Cognitive Sciences 13(3). 110–​114. MacSweeney, Mairéad, Bencie Woll, Ruth Campbell, Gemma A. Calvert, Philip K. McGuire, Anthony S. David, Andrew Simmons, & Michael J. Brammer. 2002. Neural correlates of British Sign Language comprehension: Spatial processing demands of topographic language. Journal of Cognitive Neuroscience 14(7). 1064–​1075. Marshall, Chloë R. & Gary Morgan. 2015. From gesture to sign language: Conventionalization of classifier constructions by adult hearing learners of British Sign Language. Topics in Cognitive Science 7. 61–​80. Martin, Amber Joy & Maria D. Sera. 2006. The acquisition of spatial constructions in American Sign Language and English. Journal of Deaf Studies and Deaf Education 11(4). 391–​402.


Classifiers: experimental perspectives Morgan, Gary, Rosalind Herman, Isabelle Barriere, & Bencie Woll. 2008. The onset and mastery of spatial language in children acquiring British Sign Language. Cognitive Development 23. 1–​19. Perniss, Pamela, Robin L. Thompson, & Gabriella Vigliocco. 2010. Iconicity as a general property of language: Evidence from spoken and signed languages. Frontiers in Psychology 1:227. 1–​15. Poizner, Howard, Edward S. Klima, & Ursula Bellugi. 1987. What the hands reveal about the brain. Cambridge MA: MIT Press. Schembri, Adam. 2001. Issues in the analysis of polycomponential verbs in Australian Sign Language (Auslan). Sydney: University of Sydney PhD dissertation. Schembri, Adam, Caroline Jones, & Denis Burnham. 2005. Comparing action gestures and classifier verbs of motion: Evidence from Australian Sign Language, Taiwan Sign Language, and nonsigners’ gestures without speech. Journal of Deaf Studies and Deaf Education 10(3). 272–​290. Schick, Brenda S. 1987. The acquisition of classifier predicates in American Sign Language. West Lafayette, IN: Purdue University PhD dissertation. Schick, Brenda S. 1990a. Classifier predicates in American Sign Language. International Journal of Sign Linguistics 1. 15–​40. Schick, Brenda S. 1990b. The effects of morphosyntactic structure on the acquisition of classifier predicates in ASL. In Ceil Lucas (ed.), Sign language research. Theoretical issues, 358–​374. Washington, DC: Gallaudet University Press. Schouten, Bert, Ellen Gerrits, & Arjan Van Hessen. 2003. The end of categorical perception as we know it. Speech Communication 41. 71–​80. Senft, Gunther. 1996. Classificatory particles in Kilivilla. New York: Oxford University Press. Sevcikova, Zed. 2013. Categorical versus gradient properties of handling handshapes in British Sign Language (BSL): Evidence from handling handshape perception and production by deaf signers and hearing speakers. London: University College PhD dissertation. Sevcikova Sehyr, Zed & Kearsy Cormier. 2015. Perceptual categorization of handling handshapes in British Sign Language. Language and Cognition 8(4). 501–​532. Singleton, Jenny L. & Jill P. Morford. 1993. Once is not enough: Standards of wellformedness in manual communication created over three different timespans. Language 69(4). 683–​715. Singleton, Jenny L. & Elissa L. Newport. 2004. When learners surpass their models: The acquisition of American Sign Language from inconsistent input. Cognitive Psychology 49(4). 370–​407. Slobin, Dan I., Nini Hoiting, Marlon Kuntze, Reyna Lindert, Amy Weinberg, Jennie Pyers, Michelle Anthony, Yael Biederman, & Helen Thumann. 2003. A cognitive/​functional perspective on the acquisition of “classifiers”. In Karen Emmorey (ed.), Perspectives on classifier constructions in sign languages, 271–​296. Mahwah, NJ: Lawrence Erlbaum. Supalla, Ted R. 1982. Structure and acquisition of verbs of motion and location in American Sign Language. San Diego, CA: University of San Diego PhD dissertation. Supalla, Ted R., Elissa Newport, Jenny Singleton, Sam Supalla, Doris Metlay, & Geoffrey Coulter. Unpublished. The test battery for American Sign Language morphology and syntax. Manuscript and videotape materials, New York: University of Rochester. Tang, Gladys, Felix Sze, & Scholastica Lam. 2007. Acquisition of simultaneous constructions by deaf children of Hong Kong Sign Language. In Myriam Vermeerbergen, Lorraine Leeson, & Onno Crasborn (eds.), Simultaneity in signed languages. Form and function, 283–​316. 
Amsterdam: John Benjamins. Taub, Sarah F. 2001. Language from the body: Iconicity and metaphor in American Sign Language. Cambridge: Cambridge University Press.

193

194

9 ASPECT
Theoretical and experimental perspectives
Evie A. Malaia & Marina Milković

9.1 Theoretical foundations of aspect

The variety of the world's languages, including sign languages, has resulted in the need for multiple ways to describe the temporal structure of sentences: tense and aspect. Tense is the easier of the two to understand: it describes the temporal location of an eventuality in relation to some other point in time (often another eventuality; both might be referenced in one sentence). Aspect, on the other hand, refers to the way in which the eventuality itself unfolds in time – how long it takes, whether it is dynamic or static, etc. – from the point of view of the speaker. Linguistic means of encoding aspect vary across languages – the same abstract concept might be denoted by grammatical means in one language and lexical means in another; sign languages use both manual and non-manual means to express aspect. This chapter is concerned with the variety of means of encoding aspect across sign languages, and with the ways in which aspectual meaning can be expressed precisely, in relation to the verb's event structure, using spatial means. We will start with lexical aspect – i.e., temporal properties of an event that are implied as part of the verb's lexical entry. We will then consider systems for denoting grammatical aspect in the languages where these exist, and the relationships between event structure and potential means of aspectual modification, as revealed by empirical research across a variety of sign languages.

9.1.1 Lexical aspect (Aktionsart/event structure)

Lexical aspect (also known as Aktionsart, or event structure) describes inherent temporal properties of an eventuality denoted by the verb. Several systems of describing these temporal properties exist. For example, Vendler (1969) identified four types of lexical aspect according to an event's durativity (necessary occurrence over a period of time) and the presence of an identifiable temporal reference (start- or end-state). The verbs containing a temporal reference to change of state (inchoative verbs) were subdivided into accomplishments (those which include a durative component, e.g., develop) and achievements (verbs of instantaneous change, e.g., break). Verbs without a temporal reference were subdivided into durative actions (e.g., walk) and states (e.g., know), which do not require duration. Analyses of event structure also often include semelfactives (punctual events that often occur in sets of multiples, e.g., knock or blink); for conflicting analyses, see Smith (1997) and Rothstein (2004).

The structure of an inchoative event, as introduced above, is typically assumed to have a temporal reference to the time of change (an onset, an end-point, or a punctual change). The events that do have such a time-point as part of their semantics are termed telic (from Greek telos 'goal'); those that do not are called atelic. The term event structure has also been applied to the analysis of full predicates, including arguments, rather than single lexical verbs (cf. English 'to eat fish' – an atelic predicate with a non-delimited argument – vs. 'eat the fish' and 'eat up' – telic predicates with a delimited argument). The problem of limitation (the fact that the semantics of limitation can be conveyed both by the verb itself and by the verb's arguments) resulted in the introduction of the notion of 'measuring out' of events, by way of using an additional argument and/or action quantification, and in the use of compositional semantics (Krifka 1992; Jackendoff 1996) to analyze argument- or adjective-based event modifiers. Temporal and measure-based approaches to defining event structure are not mutually contradictory; the difference lies in viewing the event as defined by the lexical verb only, or by the entire verb phrase.

In feature-based descriptions of event structure, verbs of different event types have been analyzed for the presence of the features [+/−dynamic], [+/−durative], and [+/−telic] (resultative) (Smith 1997). Using this system, Rathmann (2005) provides the following classification of American Sign Language (ASL) verb signs: states are [−dynamic, +durative, −telic], activities are [+dynamic, +durative, −telic], semelfactives are [+dynamic, −durative, −telic], accomplishments are [+dynamic, +durative, +telic], and achievements are [+dynamic, −durative, +telic].

Finally, Ramchand's (2008) nano-syntax model for cross-linguistic analysis of event (and argument) structure must be mentioned, as it subsumes both feature-based and syntactic accounts. Figure 9.1 presents an extended version of this formal model. The three phrases (assumed to be merged within the lexical entry for the verb, but inferable from the verb's semantic and syntactic behavior) represent all features used by previous models of event structure with regard to temporal and argument-related properties of verbs. Presence of the Initiation Phrase in the verbal entry allows for the representation of [+dynamic] and an Agent argument; the Undergoer Phrase can represent the [+durative] feature of the verb, as well as an Undergoer or a Theme argument; presence of the Result Phrase renders the verb [+telic], and allows for the third argument (a recipient/location) in ditransitive verbs (e.g., Bob gave Mary the book). Since the Merge operation is verb-internal, the same argument can occupy multiple positions in the structure (e.g., in verbs that describe psychological states, such as amuse and appeal, which conflate the Undergoer and the Recipient); the model provides maximal explanatory power for cases such as multiple classes of verbs of psychological state, or systemic combinability of noun Case with verbal event structure. The model also allows one to predict the behavior of verb classes that are not well described by feature-based accounts1 or taxonomies, such as verbs of psychological state (know, decide, amuse, marvel), verbs of gradual change (e.g., melt), or predicates in sign language (see Wilbur & Malaia 2008b; Malaia & Wilbur 2010; Malaia & Wilbur 2012). For example, Grose et al. (2007) demonstrate that this system can capture the difference in syntactic behaviors between handling and instrument classifiers in ASL, which are a challenge for other accounts (Grose et al. 2007: 1282).
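For ease of reference, the feature-based classification of ASL verb signs described above (following Smith 1997 and Rathmann 2005) can be set out in a simple table; this tabulation merely restates the features listed in the text:

                      dynamic   durative   telic
    states               −         +         −
    activities           +         +         −
    semelfactives        +         −         −
    accomplishments      +         +         +
    achievements         +         −         +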


Figure 9.1  Full representational model of possible event structure according to Ramchand (2008)

Components of event structure are identifiable by means of semantico-syntactic tests. Because event structure interacts with the syntax of each language, there can be variations as to the implementation of the tests, and some might not work in a given language. Here, we present parallel examples from English and ASL (from Rathmann 2005) to illustrate the process of adapting the tests to sign languages.

1. Combinability with verbs of perception. This test is used to distinguish permanent states (individual-level predicates) from temporary, or stage-level, states, as the former do not combine with perception verbs:

(1) a. # I SEE JOHN KNOW HISTORY
       (Intended: 'I see John knowing history.')    (ASL, Rathmann 2005: 67)
    b. I SEE JOHN BE-SICK
       'I saw John sick.'    (ASL, Rathmann 2005: 72)

2. Combinability with modifiers of duration (either temporal adverbials or continuative morphemes), or manner/intensity adverbials and/or non-manuals (CAREFULLY, SLOWLY). This test is used to check for the [+durative] feature (absent in states).

(2) a. # HISTORY I KNOW+continuative
       (Intended: 'History, I knew continuously.')    (ASL, Rathmann 2005: 68)
    b. JOHN BE-SICK+continuative
       'John was sick continuously.'    (ASL, Rathmann 2005: 72)

3. Combinability with imperatives as a test for agentivity. This test distinguishes between states and activities, as states do not combine with imperatives (for ASL, Rathmann (2005) uses imperatives such as GO-AHEAD, DO YOU MIND, I ORDER YOU):

(3) a. * GO-AHEAD BE-SICK
       'Go ahead and be sick!'    (ASL, Rathmann 2005: 72)
    b. GO-AHEAD EXPLAIN HISTORY IXi [MY SON]
       'Go ahead, explain history to my son.'    (ASL, Rathmann 2005: 75)

4. Combinability with verbs of termination vs. verbs of completion. Achievements [−durative, +telic] are incompatible with verbs of termination, such as 'stop' (cf. (4a) vs. (4b)); states and (some) [−telic] activities are incompatible with verbs of completion (cf. (4c) vs. (4d)).

(4) a. * I stopped arriving.
    b. I stopped driving.
    c. * I finished knowing.
    d. I finished writing.

In some languages, such as English, the interpretation of the test is made more difficult by coercion (Smith 1997), or typecasting. As a result of this operation, activities can combine with 'finish', but only when they are typecast as achievements (e.g., 'I finished running' assumes a pre-determined time or distance for the event of running). Rathmann (2005) provides ASL examples of coercion of activities into achievements, where the addition of a measure to the activity (e.g., drinking water out of a glass until the glass is empty) effectively turns an activity, which cannot combine with temporal adverbials (5a), into a resultative accomplishment, which can (5b). In (5b), adding a [+hold] morpheme provides the measuring-out feature to the verbal phrase:

(5) a. # DRINK WATER NEED 5 MINUTE
       (Intended: 'Drinking water takes 5 minutes.')    (ASL, Rathmann 2005: 110)
    b. DRINK WATER EXTENT+hold NEED 5 MINUTE
       'Drinking this much water requires 5 minutes.'    (ASL, Rathmann 2005: 110)

5. Combinability with durative 'for an hour'-type vs. delimiting 'in an hour'-type adverbials (no ASL equivalent is available, as the temporal adverbials do not differ consistently along the delimiting/durative dimension).

(6) a. NORA SLEPT FOR AN HOUR
    b. # NORA SLEPT IN AN HOUR

As with the previous test, [+durative] verbs (in (6a) the state verb 'sleep') combine with 'for an hour', but are somewhat infelicitous, though not strictly ungrammatical, with 'in an hour' (sentence (6b) could mean 'It took an hour for Nora to fall asleep', interpreting the adverbial as measuring out time to the onset of the state). On the other hand, [−durative] verbs, such as the achievement 'fall', are infelicitous (pragmatically unlikely) with 'for an hour'-type adverbials, although some informants can interpret the sentence as coerced (in Alice in Wonderland, Alice kept falling for a long time!).

6. Entailment test for temporal reference (either telicity or initiation time-reference). The original version of the test (Dowty 1979) relied on the different entailments licensed for past tense by atelic (cf. (7a)) and telic verbs and verb phrases (cf. (7b)) used in the continuous/progressive (viewpoint aspect):

(7) a. SEAN WAS DRIVING THE CAR → SEAN DROVE THE CAR
    b. SEAN WAS RUNNING A MILE ↛ SEAN RAN A MILE

In languages where continuous/progressive and past are not clearly marked, entailments resulting from predicate combinability with temporal adverbials such as ALMOST (Smith 2004), or STILL (Rathmann 2005), can be used:

(8) a. # JOHN WIN GAME. STILL WIN
       (Intended: 'John won the game. He is still winning it.')
    b. JOHN COUGH-ONCE. STILL COUGH-REPEATED
       'John coughed. He is still coughing.'    (ASL, Rathmann 2005: 85)

While the entailments arising from combinability with ALMOST can distinguish between reference time at the beginning or at the end-point of the event, STILL only allows for a distinction between inchoative (accomplishments, achievements) and non-inchoative (states, activities, semelfactives) event types. Example (8a) demonstrates that STILL does not combine with inchoative verbs such as WIN (as the entailment of durativity would be impossible); in (8b), combinability of STILL with the semelfactive COUGH-ONCE yields a reasonable entailment of a repeated semelfactive, and durative scope over multiple events.

Lexical aspect is an inherent property of a verb. While a single lexical entry may, in a specific language, denote several types of eventualities with different intrinsic temporal properties (e.g., to blink can refer either to a singular event or to a process with multiple singularities in English), those meanings are distinguishable based on semantic tests.

9.1.2 Grammatical aspect

Grammatical (viewpoint) aspect is a regularized way by which languages can describe the range of time in which the eventuality (event) is viewed. For example, a durative event with an inherent end-point (e.g., an accomplishment) might be viewed as not having been completed within the time range of a continuous viewpoint. The term grammatical aspect is also somewhat of a misnomer: the notion of temporal viewpoint in the description of an event does not necessarily have to be regularized in a language as a grammatical category, or be represented by a morpheme. However, grammatical, or viewpoint, aspect is secondary to Aktionsart: the internal temporal structure of an event affects the inventory of possible viewpoint aspect representations of the event. In other words, not every Aktionsart is expected to combine with all viewpoint aspects.

It is important to note that both the type inventory and the encoding of event and aspect typology (e.g., lexical vs. morphological vs. phonological implementation for either viewpoint aspect or Aktionsart) can vary across languages. For example, ASL employs a phonotactic method in expressing verbal telicity: telic signs have a syllabic structure that is different from that of atelic ones (Malaia & Wilbur 2010a, 2010b; Malaia & Wilbur 2012a, 2012b, 2012c; Malaia, González-Castillo, et al. 2015), and are processed as having a phonological distinction (Malaia et al. 2012). Some languages allow for substantial modification of the verb's inherent temporal semantics by what appear to be productive affixes or arguments, such that the semantics of the event structure is not restricted to the single lexical entry of the verb. For example, the verb systems of Russian and Bengali make use of event-onset time, which is denoted by a morpheme (examples (9) and (10)); the use of the morpheme is, however, restricted by the semantics of the root verb (lexical prefix and suffix, respectively; see Malaia (2004) and Basu & Wilbur (2010)).

(9) Cherez pyat' minut Artem za-pel.
    after five minutes Artem PERF.ONSET-sing.PAST.MASC.3SG
    'After five minutes, Artem began to sing.'    (Russian, Malaia & Basu 2010: 5)

(10) Ram du minit-er moddhe kotha-ta bol-e uth-lo.
     Ram two minute-GEN within word-CL say-PERF rise-PAST.3SG
     'Ram said those words within two minutes.'    (Bengali, Malaia & Basu 2010: 5)

Similarly, ASL can have 'delayed onset' (a hold at the beginning) in verbs that contain onset semantics (Brentari 1998). The inventory of both lexical and grammatical aspects is also language-specific. Across sign languages, both viewpoint aspect and Aktionsart have been shown to use free and bound morphemes, as well as non-manual markers. The marking for the two types of aspect – viewpoint and Aktionsart – can co-occur in one sign, or even be conflated (e.g., velocity-based aspect marking in Croatian Sign Language (HZJ); see Milković (2011)). Aspect-denoting morphemes can develop from verbs with full semantics (such as 'finish' or 'continue'), or from adverbs; the distribution of the verb when it has full semantics is different from the distribution of the related form when it is used as an aspect marker. In general, the analysis of the role of a morpheme in the aspectual system of a (sign) language requires use of the following criteria:

1. Distributional properties: Can the morpheme of aspectual modification occur with any verb? Are there phonological, phonotactic, syntactic, or semantic restrictions in its distribution? Can it co-occur with markers for tense and event structure? Is it in complementary distribution with another aspectual marker (either free, bound, or non-manual)? If the language has several markers for a specific aspect (e.g., perfective) in its inventory, they are typically found in complementary distribution due to phonological or event structure-based restrictions.

2. Consistency of correspondence between form and meaning in the specific language, and of interpretation across signers. In sign languages that are continuously changing, grammaticalization of a morpheme can occur over the course of 20–40 years (cf. the analysis of Al-Sayyid Bedouin Sign Language (ABSL); Sandler et al. (2005)).

9.2 Viewpoint aspect in sign languages

Viewpoint aspect describes the speaker's perspective, or her/his intent as to how to view the event. Comrie (1976: 4) has formulated the two aspectual options available simply as: "the perfective looks at the situation from outside […], the imperfective looks at the situation from the inside". The important component here is the optionality, or the speaker's choice – i.e., the same event can be presented either from outside (using perfective aspect) or from the inside (using imperfective/continuous aspect). These options are, however, constrained by the lexical aspect: e.g., the internal quantification of the event determines the possibilities of the external viewpoint.


The relationship between tense and aspect can be difficult to describe outside the tense-aspectual system of a specific language. Reichenbach (1947) developed an abstract system that could characterize the relationship between the time of event occurrence and the time of speech (making the utterance), and characterized aspectual relationships using the concept of reference time – the time the speaker intends to refer to as an 'anchor' in the utterance. The reference time of an event is subjectively determined, in contrast with speech time and event time (both of which are objective). The use of this system allows us to distinguish, for example, between simple past and present perfect in English, as in I saw the cat vs. I have seen the cat. In both sentences, event time is in the past, in relation to speech time. However, in the simple past the event is considered from some moment in the past; in the present perfect, the event (of seeing) is considered in its reference to the present – the speech time. We will need to use this system as we consider the use of markers of viewpoint aspect in sign languages.
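Schematically – a sketch using Reichenbach's three points, event time (E), reference time (R), and speech time (S), with '<' for temporal precedence and '=' for simultaneity – the English contrast just described can be represented as:

    simple past (I saw the cat):            E = R < S
    present perfect (I have seen the cat):  E < R = S

In the simple past, the event is viewed from a reference point that itself lies in the past; in the present perfect, the reference point coincides with the moment of speech.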

9.2.1 Free aspectual markers

Free aspectual markers of perfective and continuous aspect are the two most often noted across sign languages. This does not necessarily mean that viewpoint (grammatical) aspect is restricted to these two meanings. It is more likely that this inventory is a reflection of the most typical paths of grammaticalization for aspectual morphemes with these meanings: from a verb such as FINISH, or from an adverb like ALREADY, for perfective aspect, and from verbs such as TRY or CONTINUE, or from an adverb such as NOT-YET, for continuative. The exact aspectual meaning of the marker is, however, language-specific, and cannot be inferred from the (prior) lexical meaning of the morpheme (cf. the use of NOT-YET as a negative perfective marker in Greek Sign Language (GSL)). Use of such markers has been described in Italian Sign Language (LIS, Zucchi et al. 2010), Hong Kong Sign Language (HKSL, Tang 2009), Israeli Sign Language (ISL, Meir 1999), ASL (Fischer & Gough 1978), Croatian Sign Language (HZJ, Milković 2011), Greek Sign Language (GSL, Sapountzaki 2005), Sign Language of the Netherlands (NGT, Hoiting & Slobin 2001), Turkish Sign Language (TİD, Zeshan 2003a; Dikyuva 2011; Karabüklü 2016), British Sign Language (BSL, Brennan 1983), Swedish Sign Language (SSL, Bergman & Dahl 1994), German Sign Language (DGS, Rathmann 2005), and Indo-Pakistani Sign Language (IPSL, Zeshan 2003b). In the following, examples are presented of the use and distribution of these morphemes for several unrelated sign languages.

LIS. Zucchi et al. (2010) report aspectual use of the verb DONE. Note that in example (11a), the position of DONE as a full lexical verb is before the main verb EAT, whereas in its aspectual use in (11b) it follows the main verb.

(11) a. GIANNI CAKE DONE EAT
        'Gianni has finished eating the cake.'
     b. GIANNI HOUSE BUY DONE
        'Gianni has bought a house.'    (LIS, Zucchi et al. 2010: 200)

HKSL. Tang (2009) shows that aspectual FINISH, which encodes termination or completion of an action, is constrained in its use by the event type of the main verb: specifically, it occurs with event types that have dynamic or durational components (activities, achievements, accomplishments, semelfactives), as in (12a), but notably not with states, as the ungrammaticality of (12b) shows.

(12) a. IX-DET BOY CRY FINISH, GO HOME
        'After the boy had cried, he went home.'
     b. * IX WOMAN DISLIKE DOG FINISH
        'The woman has disliked dogs.'    (HKSL, Tang 2009: 26)

ISL. Meir (1999) analyzed ALREADY in ISL as a marker of perfective aspect rather than of tense, because of the co-occurrence of ALREADY with adverbials denoting past, present, and future time. The aspectual meaning of ALREADY is that of viewing an event as fully completed, as illustrated in (13).

(13) a. BOOK INDEXa INDEX1 ALREADY READ THREE-DAY
        'It took me three days to read this book.'
     b. INDEX1 ALREADY WRITE LETTER SISTER POSS1
        'I have written a letter to my sister.'    (ISL, Meir 1999: 52, 49)

ASL uses the lexical verb FINISH, a head nod, or their combination to mark perfective aspect (Fischer & Gough 1999[1972]).

(14) a. INDEX1 PAST WALK SCHOOL
        'I used to walk to school.'
     b. YOU EAT FINISH, WE GO SHOPPING
        'After you have eaten, we'll go shopping.'    (ASL, Fischer & Gough 1999[1972]: 68)

HZJ. Milković (2011) describes a rich system of verb pairs differentiated by aspect, such that one member has telic/perfective and the other atelic/imperfective meaning. The morphemes that distinguish the verbs in a pair depend on the morpho-phonological and syntactico-semantic properties of the root. The most typical way to create the telic and perfective form from the atelic and imperfective form of the same verb is by speeding up the root movement. This, however, is a bound morpheme, and as such it is described in Section 9.2.2. For the verbs that do not allow morphemic change by motion acceleration, either compositional formation (use of a combination of morphemes that is not productive outside of a short list of verbs) or suppletive formation (different lexical roots in a semantic/aspectual pairing – also detailed in the following section) is possible. As an example of compositional formation of perfective aspect, in (15a), FINISH (GOTOVO) comes after the atelic/imperfective verb HAVE-BREAKFAST, as a marker of perfectivity, indicating the completion of an action. Similarly, in (15b), the adverb ALREADY (VEĆ) comes before the same imperfective verb, rendering the verb phrase telic/perfective.

(15) a. NEVER HAVE-BREAKFASTipfv … TODAY HAVE-BREAKFASTipfv FINISH
        'I almost never have breakfast. Today I had breakfast.'
     b. … TODAY ALREADY HAVE-BREAKFASTipfv
        'Today I had breakfast.'    (HZJ, Milković 2011: 59, 60)


GSL. For GSL, Sapountzaki (2005) describes the inventory of perfective markers as consisting of aspectually used BEEN and two negative markers, namely verbal NOT-BEEN and adverbial NOT-YET.

NGT. Hoiting & Slobin (2001) describe a system in which a free aspectual marker occurs in complementary distribution with a bound inflection for continuous/habitual aspect. The complementary distribution of the free vs. bound aspectual morpheme is determined by phonological constraints: signs that contain internal lexical motion or include body contact cannot undergo aspectual inflection, and therefore occur with the free sign THROUGH, as illustrated in (16).

(16) INDEX3 TRY THROUGH++++
     'He tried continuously / tried and tried and tried.'    (NGT, Hoiting & Slobin 2001: 129)

TİD. Dikyuva (2011) and Karabüklü (2016) identified a rich system of manual and non-manual aspectual markers in TİD. Manual markers include the sign GO (GİTMEK), which can be modified to reflect the continuative aspect and the completive aspect (Dikyuva 2011: 53), and the completive aspect marker BİT 'finish'. In their use, both manual and non-manual markers of aspect in TİD interact with the internal event structure of verb signs (see Section 9.2.2 on bound markers for details).

9.2.2 Bound markers of aspect

The use of dynamic and spatial means for the expression of grammatical aspect has attracted substantial interest at least since Fischer (1973). Klima & Bellugi (1979) describe 15 different productive (morphemic) modulations in 'dynamic qualities and manners of movement' in ASL, including reduplication, rate of signing, tension, pauses in cycles, etc. The difficulty, however, arose in identifying purely aspectual vs. non-aspectual inflectional morphemes on the verbs. Further analyses focused on identifying overlaps in form and meaning, separating the expression of event structure/Aktionsart by dynamic means from viewpoint aspect, and identifying phonological and syntactic restrictions on the distribution of these markers (Warren 1978; Anderson 1982; Wilbur 1987; Rathmann 2005; Wilbur 2005; Wilbur 2009). Owing to the richness of inflectional possibilities in sign languages (Emmorey 1996; Malaia & Wilbur 2014), there is a debate as to the inventory of bound aspectual markers. The inflectional markers that have been attested across multiple sign languages include continuous, iterative2, and perfective marking. Like the free markers, bound aspectual morphemes (inflections) have been noted to combine with non-manuals, and their distribution is typically restricted by phonological properties of verb signs. A number of them in different sign languages are presented next.

ASL. In ASL, reduplication is frequently used as a bound marker of aspect. The base of reduplication (the part of the sign that is then copied) and the copy (or reduplicant, in speech) are typically similar in overall form, although particular spatial/temporal features of the copy (e.g., its shape and stress, as compared to the base sign) can differ. Both the form and the meaning of potential reduplication in ASL verbs are determined by the interaction of verbal event structure (i.e., telic vs. atelic) and the phonological form of the base. Wilbur (2009) provides the following classification based on the interaction between the prosodic prominence of base and copy and the (a)telicity of the root in aspectual reduplication in ASL:


1. If the prosodic prominence of the copy is equal to that of the base, the aspectual inflection is interpreted as habitual for telic roots, and durative for atelic ones.

2. If the prosodic prominence of the copy is less than that of the base, the aspectual inflection is interpreted as incessant for telic roots; there is no evidence for use of atelic roots with a less-prominent copy.

3. If the prosodic prominence of the copy is greater than that of the base, the aspectual inflection is interpreted as iterative for telic roots, and continuative for atelic ones.

BSL. Sutton-Spence & Woll (1999) note the use of an extended hold to express continuative aspect in signs that do not have path movement in BSL, such as LOOK and HOLD. Iterative aspect is expressed by reduplication of path movement.

NGT. Hoiting & Slobin (2001) note that the elliptical modulation functioning as an inflectional marker of continuative aspect is combined with non-manual markers in NGT, namely puffed cheeks and/or pursed lips, with a slight blowing gesture.

SSL. Bergman (1983) notes that continuous ('durative') aspect can be denoted with a cyclical arc movement of the head, while the hands do not move. Iterative aspect is described by Bergman & Dahl (1994) as marked by repeated short movements (fast reduplication). Similar inflectional markers of aspect have also been noted in Spanish Sign Language (LSE, Cabeza Pereiro & Fernández Soneira 2004), IPSL (Zeshan 2003b), Nicaraguan Sign Language (Senghas 1995), and HZJ (Milković 2011).

HZJ. The most productive way to create a telic and perfective form from the atelic and imperfective form of the verb is by speeding up the root movement; verb pairs such as ČITATI/PROČITATI 'to read/to have read' or BRISATI/OBRISATI 'to erase/to have erased' differ in the speed of hand motion (see Figure 9.2; details of the quantitative motion-capture analysis are given in Malaia et al. (2013)).

Figure 9.2  HZJ aspectual verb pair: (a) BRISATI 'to erase' (ipfv); (b) OBRISATI 'to have erased' (pfv)


Atelic/imperfective forms can also be derived from a telic/perfective form of the verb by using reduplication, such as in KUPITI/KUPOVATI 'to have bought/to buy', or DAROVATI/DARIVATI 'to have given, to donate/to be giving' (Milković 2011) – for an illustration of this process, see Figure 9.3.

Figure 9.3  HZJ aspectual verb pair: (a) DAROVATI 'to have given, to donate' (pfv); (b) DARIVATI 'to be giving, to give' (ipfv)

A small group of verbs use different root morphemes (i.e., signs differing in handshape, location, orientation, and kinematics) to convey contrasting telic-aspectual meanings with similar semantics, such as TRAŽITI/NAĆI 'to seek/to find' (cf. Figure 9.4), and PUTOVATI/STIĆI 'to travel/to arrive'.

Figure 9.4  HZJ aspectual verb pair: (a) TRAŽITI 'to seek'; (b) NAĆI 'to find'


In the general context of HZJ, where telicity is denoted by a productive morpheme, these pairs can be considered suppletive versions of telic-atelic verb pairs. In other sign languages, where telicity marking is not a productive morpheme (e.g., ASL), these are considered independent lexical items.

TİD. TİD has a well-described system of bound non-manual markers of aspect, which have complex interactions both with the internal event structure of verbs and with manual (free) markers of aspect (cf. Dikyuva 2011; Karabüklü 2016). Non-manuals include markers of completive aspect ('bn', performed by sticking the tongue out slightly through the center of the mouth), continuative aspect ('lele', which consists of protruding the tongue slightly between the teeth and flicking it up and down repeatedly and quite rapidly), and inceptive aspect ('ee', performed by gritting the teeth and pulling back the corners of the mouth). For example, the non-manual marker of continuative aspect 'lele' does not appear to be used with telic verbs (Dikyuva 2011: 52). The manual aspect marker BİT 'finish', on the other hand, cannot be used with achievement verbs such as WASH, SLICE, READ (Karabüklü 2016: 129). Interestingly, in this case the manual and non-manual completive markers in TİD differ in scope, such that non-manual 'bn' is compatible with the meaning of termination of an activity (cf. (17b,d)), while BİT requires an event to be completed (cf. (17a) and (impossible) (17c)).

(17) a. IX1 BABY WASH BİT
        'I (have) washed the baby.'
                  bn
     b. IX1 BABY WASH
        'I (have) washed the baby.'
     c. * IX1 BABY SWING BİT
        (Intended: 'I (have) swung the baby.')
                  bn
     d. IX1 BABY SWING
        'I (have) swung the baby.'    (TİD, Karabüklü 2016: 129)

9.3 Event structure and reference time representation in sign languages

Segmentation of continuous reality into separate events is the basis of everyday cognition. The use of dynamic (object velocity) and scene-changing cues in visual event segmentation and memory retrieval is well described in perceptual and cognitive psychology (Zacks et al. 2009; Malaia, González-Castillo, et al. 2015; Malaia, Wilbur, et al. 2015). However, the link between perceptual event segmentation and linguistic events has only recently come into the main arena of linguistic research. Linguistically, the presence or absence of temporal reference in the event description (the verb's Aktionsart or event structure) is the primary determinant as to how an event can be further described using viewpoint aspect. Experimental evidence suggests that the processing of visual segmentation of events and the determination of linguistic event structure/Aktionsart are rooted in similar psychological mechanisms (Malaia, Wilbur, et al. 2008; Malaia et al. 2009; Strickland et al. 2015).

The connection between predicate semantics and kinematics (dynamics of hand motion) in sign languages was first formulated as the Event Visibility Hypothesis (Wilbur 2008: 229):

    in the verbal domain, the path movement of the predicate signs indicates elapsed time (t) of an event (e), […] and […] phonological end-marking of the movement reflects the final state of telic events (ea). Furthermore, the predicate stops at points (p) in space to indicate individual semantic variables (x).3

Most of the earlier analyses of predicates in sign languages did not make a distinction between the temporal structure of the eventuality and viewpoint aspect. This partially accounts for the high number of aspectual distinctions in ASL noted in the earlier literature, which deemed as 'aspectual' all spatial-dynamic means of modifying the basic temporal makeup of eventualities.

9.3.1 Markers of event structure

As proposed in the Event Visibility Hypothesis, the event and argument structure of verbs in sign languages can be modified by means of changes in movement path, duration, pauses, and parameters of reduplication (speed, path, etc.), as well as changes in velocity and acceleration applied to portions of motion in the sign. Modifications of the verb sign that affect its event structure lead to changes in how the sign can be used in syntactic structures (application of viewpoint aspect and agreement markers). The difficulty in identifying a cross-linguistically applicable physical inventory of event structure modifiers is that the same physical marker can be recruited to fulfill different roles in the phonological-semantic systems of different sign languages.

For example, ASL and HZJ differ in the ways that Aktionsart is expressed in the structure of the language. In ASL, telic events are marked at the semantics-phonology interface. Atelic and telic verb signs in ASL differ in whether the two timing slots in sign-syllables contain the same (atelic) or different (telic) setting, orientation, aperture, and directionality of the movement path, as shown in Figure 9.5, based on Brentari's (1998) prosodic model. The change in velocity – i.e., the acceleration-deceleration pattern within the sign – results in a bipartite structure of the sign. In ASL, the change in motion profile is part of the lexical root and is not productive. HZJ, on the other hand, has grammaticalized the expression of verbal telicity in a (suffix-like) acceleration of dominant hand motion, which is generally productive (with a few exceptions). In HZJ, telicity is also merged (although not for all verb signs) with aspect, in the manner typical of the surrounding Slavic languages (Milković & Malaia 2010; Milković 2011).

Figure 9.5  Representation of atelic and telic verb signs in ASL (based on Brentari’s (1998) prosodic model)


ASL. Rathmann (2005) suggested that multiple sign-​internal inflections, which had been typically viewed as grammatical aspect due to their productivity in sign language (e.g., Klima & Bellugi 1979), should be considered event structure-​internal modifiers. Those include ‘unrealized inceptive’ (Liddell 1984), ‘delayed completive’ (Brentari 1998), and ‘resultative’ (Wilbur 2003). Grose (2008) further developed a feature-​based taxonomy of event and argument structure in ASL verbs, which includes an account of syntactic and phonological behaviors in classifier, agreeing, and plain predicates (see Grose (2008) for details). Austrian Sign Language (ÖGS). Schalber (2006) shows that ÖGS, like ASL, uses morphemes to mark telic and atelic event structures, but phonological realizations of these morphemes are language dependent. In ÖGS, mouth non-​manuals are sensitive to the event structure. These non-​manuals divide into two types:  (i) continuous non-​manuals (NMs), composed of a single facial posture which functions as an adverbial modifier of the event (positional NMs), and (ii) discontinuous transition, composed of a single abrupt change in the position of the articulator, which appears to emphasize the initial or final portion of the event structure (transitional NMs). While transitional NM use in ÖGS is restricted to telic events (example (18), Figure  9.6), positional-​mouth NMs may occur with both telic and atelic events (example (19), Figure 9.7, with an atelic event).  

(18)                       pf
     KATHARINA IX AMERIKA FLY:[direction]
     'Katharina flew to the USA.'    (ÖGS, Schalber 2006: 225)

Figure 9.6  Transitional NM (discontinuous mouth gesture of abrupt exhaling, ‘pf’) with telic event structure (© John Benjamins, reprinted with permission)          

(19)                ph
     WOMAN IX PLANE GO-BY-PLANE:[tracing]
     'The woman is going by plane.'    (ÖGS, Schalber 2006: 225)

Figure 9.7  Positional NM (continuous mouth gesture, 'ph') with atelic event structure (© John Benjamins, reprinted with permission)


Similar use of non-manuals has been identified in both ASL and HZJ (Dukić et al. 2010).

TİD. Zeshan (2003a) describes a single accentuated movement, sometimes accompanied by a head nod or forward movement of the torso, which appears to be a morpheme that can occur simultaneously with free perfective aspect markers in TİD. The morpheme has a phonological restriction: it does not occur with verb signs that have no path movement, and is possibly an equivalent of a telicity marker in TİD.

9.3.2 Experimental investigations of aspect and event structure in sign languages

There is a dearth of research on the acquisition of reference time and aspect in sign languages, although in spoken languages the ability to infer telicity from visual and linguistic data has been identified as crucial for normal language acquisition (Leonard 2015). An account of two children learning HKSL indicates that they appear to first acquire a conflated notion of telic-perfective-past (Tang 2009), before learning the full sign-language-specific system of event structure and viewpoint aspect.

Motion-capture investigations of the dynamic representation of event structure and aspect in sign languages indicate that signers do recruit physical properties of visual motion to convey verbal telicity (Wilbur & Malaia 2008a; Malaia & Wilbur 2012a, 2012b). Analysis of motion-capture data for verbal predicates in two unrelated sign languages, ASL and HZJ, has shown that signers of both languages systematically use the velocity of hand motion, as well as the derivative of velocity – deceleration of the dominant hand – to distinguish telic vs. atelic verb classes during sign production (Malaia, Borneman, et al. 2008; Malaia & Wilbur 2008a, 2008b; Malaia et al. 2013). Signers in both ASL and HZJ appear to rely on the perceptual ability inherent to the psychological mechanism of event segmentation, i.e., evaluation of the comparative speed and acceleration of biological motion. Strickland et al. (2015) further demonstrated that non-signers are also capable of inferring the basic temporal makeup of events from visual observation of individual signs.

Since sign languages do recruit physical properties of events for linguistic purposes, they can regularize (or grammaticalize) and incorporate those features in different language modules, as illustrated by the comparisons of event structure markers in ASL, HZJ, and ÖGS. Moreover, while it is clear that some physical properties of overt events are recruited by signers of different sign languages, the full inventory of dynamic properties (so far including sign duration, velocity, and acceleration of the dominant hand) requires further investigation. Mathematically, the complexity of motion in ASL has been shown to contain more information than everyday human motion across the full range of recorded speeds of motion (from 0.1 to 15 Hz; Malaia 2014; Malaia et al. 2016; Malaia et al. 2017; Borneman et al. 2018). Thus, analyses of motion in sign languages in general, and the application of event visibility to both Aktionsart and aspect, are far from complete.
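To make the kinematic measures concrete, the sketch below illustrates one simple way of estimating peak speed and peak deceleration of the dominant hand from motion-capture samples. It is an illustrative reconstruction under simplified assumptions (uniform sampling, a single hand marker), not the analysis pipeline of the studies cited above; the function and variable names are hypothetical.

import numpy as np

def kinematic_profile(positions, sampling_rate=100.0):
    # positions: (n_samples, 3) array of dominant-hand marker coordinates (metres);
    # sampling_rate: motion-capture frame rate in Hz (hypothetical value).
    dt = 1.0 / sampling_rate
    velocity = np.gradient(positions, dt, axis=0)   # per-axis velocity, m/s
    speed = np.linalg.norm(velocity, axis=1)        # scalar speed of the hand
    accel = np.gradient(speed, dt)                  # rate of change of speed, m/s^2
    # Peak speed and peak deceleration (the largest drop in speed, reported
    # as a positive number) are the two measures compared across verb classes.
    return speed.max(), -accel.min()

# Hypothetical usage with synthetic data standing in for one recorded sign:
trajectory = np.cumsum(np.random.randn(200, 3) * 0.002, axis=0)
peak_speed, peak_deceleration = kinematic_profile(trajectory)

Such estimates can then be compared across telic and atelic verb classes, as in the production studies cited above.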

9.4 Conclusion

In this chapter, we have reviewed existing research on the means of expressing various types of aspect across a variety of unrelated sign languages. The temporal properties of an event in sign languages (lexical aspect) tend to be represented as visually overt (Strickland et al. 2015; Malaia 2017; Blumenthal-Dramé & Malaia 2019), and are implied as part of the verb's lexical entry. The systems for denoting grammatical aspect, and the relationship between utterance time, reference time, and event time, are less researched, and present a fascinating ground for further investigations. While empirical research on the relationship between types of aspect is complicated by the variability of representations among sign languages, the spatial means of representing aspectual-temporal relationships in sign languages provide rich ground for cross-linguistic research into the human capacity for conceptualizing time and space.

Acknowledgments

Preparation of this chapter was partially funded by Grant #1734938 from the U.S. National Science Foundation and a European Union Marie S. Curie FRIAS COFUND Fellowship Programme (FCFP) award to Evie A. Malaia.

Notes

1 The difficulty for feature-based approaches lies in the taxonomic representation of multiple conflated and non-conflated argument roles that can be occupied by the argument in the same syntactic position.

2 Iterative aspect can be considered a sub-category of continuous aspect, or an aspect by itself – the relative taxonomy often depends on whether iterativity is represented grammatically or semantically in the language under consideration. In turn, iterative aspect is sometimes viewed as subsuming a distinct subcategory of habitual aspect, although the semantics of the two are distinct: the habitual aspect "describes a situation which is characteristic of an extended period of time" (Comrie 1976: 27–28), while the iterative aspect's meaning conveys 'repeated occurrences of the same situation' (Declerck 1991: 277). Habitual aspect, in its turn, might include 'frequentative' and 'incessant'.

3 See Wilbur (2005) for the analysis of the complex predicate (distributive embedded in the iterative), illustrating the use of spatial/dynamic means to express the components of event-internal structure, such as reference time and argument roles.

References

Anderson, Lloyd. 1982. Universals of aspect and parts of speech: Parallels between signed and spoken languages. Typological Studies in Language 1. 91–114. Basu, Debarchana & Ronnie Wilbur. 2010. Complex predicates in Bangla: An event-based analysis. Rice Working Papers in Linguistics 2. Bellugi, Ursula & Edward S. Klima. 1979. The signs of language. Cambridge, MA: Harvard University Press. Bergman, Brita. 1983. Verbs and adjectives: Morphological processes in Swedish Sign Language. In Jim Kyle & Bencie Woll (eds.), Language in sign: An international perspective on sign language, 3–9. Beckenham: Croom Helm. Bergman, Brita & Östen Dahl. 1994. Ideophones in sign language? The place of reduplication in the tense-aspect system of Swedish Sign Language. In Carl Bache, Hans Basbøll, & Carl-Erik Lindberg (eds.), Tense, aspect and action: Empirical and theoretical contributions to language typology, 397–422. Berlin: Mouton de Gruyter. Blumenthal-Dramé, Alice & Evie A. Malaia. 2019. Shared neural and cognitive mechanisms in action and language: The multiscale information transfer framework. Wiley Interdisciplinary Reviews: Cognitive Science 10(2). e1484. Borneman, Joshua D., Evie A. Malaia, & Ronnie B. Wilbur. 2018. Motion characterization using optical flow and fractal complexity. Journal of Electronic Imaging 27(05). 1. Brennan, Mary. 1983. Marking time in British Sign Language. In Jim Kyle & Bencie Woll (eds.), Language in sign: An international perspective on sign language, 10–31. Beckenham: Croom Helm. Brentari, Diane. 1998. A prosodic model of sign language phonology. Cambridge, MA: MIT Press.


Evie A. Malaia & Marina Milković Cabeza Pereiro, Carmen & Ana Fernández Soneira. 2004. The expression of time in Spanish Sign Language (LSE). Sign Language & Linguistics 7(1). 63–​82. Comrie, Bernard. 1976. Aspect:  An introduction to verbal aspect and related problems. Cambridge: Cambridge University Press. Declerck, Renaat. 1991. Tense in English: Its structure and use in discourse. London: Routledge. Dikyuva, Hasan. 2011. Grammatical non-​ manual expressions in Turkish Sign Language. Preston: University of Central Lancashire MA thesis. Dowty, David. 1979. Word meaning and Montague Grammar. Dordrecht: Reidel. Dukić, Lea, Marina Milković, & Ronnie B. Wilbur. 2010. Evidence of telicity marking by nonmanuals in HZJ. Poster presented at Theoretical Issues in Sign Language Research (TISLR 10), Purdue University. Emmorey, Karen. 1996. The confluence of space and language in signed languages. In Paul Bloom, Mary Peterson, Lynn Nadel, & Merrill F. Garrett (eds). Language and space, 171–​220. Cambridge, MA: MIT Press. Fischer, Susan D. 1973. Two processes of reduplication in the American Sign Language. Foundations of Language 9(4). 469–​480. Fischer, Susan & Bonnie Gough. 1978. Verbs in American Sign Language. Sign Language Studies 18(1).  17–​48. Fischer, Susan & Bonnie Gough. 1999[1972]. Some unfinished thoughts on FI N I S H . Sign Language & Linguistics 2(1). 67–​77. Grose, Donovan. 2008. The geometry of events:  Evidence from English and American Sign Language. West Lafayette, IN: Purdue University PhD dissertation. Grose, Donovan, Ronnie B. Wilbur, & Katharina Schalber. 2007. Events and telicity in classifier predicates: A reanalysis of body part classifier predicates in ASL. Lingua 117(7). 1258–​1284. Hoiting, Nini & Dan I. Slobin. 2001. Typological and modality constraints on borrowing: Examples from the Sign Language of the Netherlands. In Diane Brentari (ed.), Foreign vocabulary in sign languages, 121–​137. Mahwah, NJ: Lawrence Erlbaum. Jackendoff, Ray. 1996. The proper treatment of measuring out. Natural Language and Linguistic Theory 14. 305–​354. Karabüklü, Serpil. 2016. Time and aspect in Turkish Sign Language (TİD): Manual and nonmanual realization of ‘finish’. Istanbul: Boğaziçi University MA Thesis. Krifka, Manfred. 1992. Thematic relations as links between nominal reference and temporal constitution. In Ivan Sag & Anna Szabolcsi (eds.), Lexical matters, 29–​54. Stanford, CA:  CSLI Publications. Leonard, Laurence B. 2015. Time-​ related grammatical use by children with SLI across languages: Beyond tense. International Journal of Speech-​Language Pathology 17(6). 545–​555. Liddell, Scott. 1984. Unrealized inceptive aspect in American Sign Language:  Feature insertion in syllabic frames. In Joseph Drogo, Veena Mishra, & David Testen (eds.), Papers from the Twentieth Regional Meeting of the Chicago Linguistic Society, 257–​270. Chicago, IL: Chicago Linguistic Society. Malaia, Evie A. 2004. Event structure and telicity in Russian: An event-​based analysis for the telicity puzzle in Slavonic languages. Ohio State University Working Papers in Slavic Studies 4.87–​98. Malaia, Evie A. 2014. It still isn’t over: Event boundaries in language and perception. Language and Linguistics Compass 8(3). 89–​98. Malaia, Evie A. 2017. Current and future methodologies for quantitative analysis of information transfer in sign language and gesture data. Behavioral and Brain Sciences 40. e63. Malaia, Evie A. & Debarchana Basu. 2010. 
Comparative analysis of event structure in verbal morphosyntax of Russian and Bangla. Manuscript, Purdue University. Malaia, Evie A., John D. Borneman, & Ronnie B. Wilbur. 2008. Analysis of ASL motion capture data towards identification of verb type. In Johan Bos & Rodolfo Delmonte (eds.), Semantics in text processing, 155–​164. London: College Publications. Malaia, Evie A., Joshua D. Borneman, & Ronnie B. Wilbur. 2016. Assessment of information content in visual signal: Analysis of optical flow fractal complexity. Visual Cognition 24(3). 246–​251. Malaia, Evie A., Joshua D. Borneman, & Ronnie B. Wilbur. 2017. Information transfer capacity of articulators in American Sign Language. Language and Speech 61(1). 97–​112. Malaia, Evie A., Javier González-​Castillo, Cristine Weber, Thomas M. Talavage, & Ronnie B. Wilbur. 2015. Neural processing of verbal event structure: Temporal and functional dissociation


Aspect between telic and atelic verbs. In Roberto de Almeida & Christina Manouilidou (eds.), Cognitive science perspectives on verb representation and processing, 131–​140. New York: Springer. Malaia, Evie A., Ruwan Ranaweera, Ronnie B. Wilbur, & Thomas M. Talavage. 2012. Event segmentation in a visual language: Neural bases of processing American Sign Language predicates. NeuroImage 59(4). 4094–​4101. Malaia, Evie A. & Ronnie B. Wilbur. 2008a. The biological bases of syntax-​semantics interface in natural languages: Cognitive modeling and empirical evidence. In Alexei Samonovich (ed.), Biologically inspired cognitive architectures:  Papers from the Association for Advancement of Artificial Intelligence Symposium, 113–​116. Menlo Park, CA: AAAI Press. Malaia, Evie A. & Ronnie B. Wilbur. 2008b. Event Visibility Hypothesis: Motion capture evidence for overt marking of telicity in ASL. Paper presented at the Linguistic Society of America Meeting, Chicago. Malaia, Evie A. & Ronnie B. Wilbur. 2010a. Experimental evidence from sign language for a phonology-​syntax-​semantic interface. Paper presented at Language Design: Second Meeting of the Biolinguistics Network, Université du Québec à Montréal. Malaia, Evie A. & Ronnie B. Wilbur. 2010b. Representation of verbal event structure in sign languages. In Pier Marco Bertinetto, Anna Korhonen, Alessandro Lenci, Alissa Melinger, Sabine Schulte im Walde, & Aline Villavicencio (eds.), Proceedings of Verb 2010: The identification and representation of verb features, 165170. Pisa: Scuola Normale Superiore and Università di Pisa. Malaia, Evie A. & Ronnie B. Wilbur. 2012a. Telicity expression in the visual modality. In Violeta Demonte & Louise McNally (eds.), Telicity, change, and state: A cross-​categorial view of event structure, 122–​136. Oxford: Oxford University Press. Malaia, Evie A. & Ronnie B. Wilbur. 2012b. Kinematic signatures of telic and atelic events in ASL predicates. Language and Speech 55(3). 407–​421. Malaia, Evie A. & Ronnie B. Wilbur. 2012c. What sign languages show:  neurobiological bases of visual phonology. In Anna Maria Di Sciullo (ed.), Towards a biolinguistic understanding of grammar: Essays on interfaces, 265–​275. Philadelphia: John Benjamins. Malaia, Evie A., Ronnie B. Wilbur, & Marina Milković. 2013. Kinematic parameters of signed verbs. Journal of Speech, Language, and Hearing Research 56(5). 1677–​1688. Malaia, Evie A., Ronnie B. Wilbur, & Thomas Talavage. 2008. Experimental evidence of event structure effects on American Sign Language predicate production and neural processing. Proceedings from the Annual Meeting of the Chicago Linguistic Society 44(2), 203–​ 211. Chicago: Chicago Linguistic Society. Malaia, Evie A., Ronnie B. Wilbur, & Christine Weber. 2015. Event end-​point primes the undergoer argument:  Neurobiological bases of event structure processing. In Boban Arsenijević, Berit Gehrke, & Rafael Marín (eds.), Studies in the composition and decomposition of event predicates, 231–​248. New York: Springer. Malaia, Evie A., Ronnie B. Wilbur, & Christine Weber-​Fox. 2009. ERP evidence for telicity effects on syntactic processing in garden-​path sentences. Brain and Language 108(3). 145–​158. Meir, Irit. 1999. A perfect marker in Israeli Sign Language. Sign Language & Linguistics 2(1). 43–​62. Milković, Marina. 2011. Verb classes in Croatian Sign Language (HZJ): Syntactic and semantic properties. Zagreb: University of Zagreb PhD dissertation. Milković, Marina & Evie A. Malaia. 2010. 
Event visibility in Croatian Sign Language: Separating aspect and Aktionsart. Poster presented at Theoretical Issues in Sign Language Research (TISLR 10), Purdue University. Ramchand, Gillian Catriona. 2008. Verb meaning and the lexicon: A first phase syntax. Cambridge: Cambridge University Press. Rathmann, Christian. 2005. Temporal aspect in American Sign Language. Austin, TX: University of Texas at Austin PhD dissertation. Reichenbach, Hans. 1947. Elements of symbolic logic. New York: Free Press. Rothstein, Susan. 2004. The syntactic forms of predication. In Susan Rothstein (ed.), Predicates and their subjects, 100–​129. Dordrecht: Springer. Sandler, Wendy, Irit Meir, Carol Padden, & Mark Aronoff. 2005. The emergence of grammar:  Systematic structure in a new language. Proceedings of the National Academy of Sciences 102(7). 2661–​2665.


Sapountzaki, Galini. 2007. Free functional elements of tense, aspect, modality and agreement as possible auxiliaries in Greek Sign Language. Bristol: University of Bristol, Centre for Deaf Studies, PhD dissertation.
Schalber, Katharina. 2006. Event visibility in Austrian Sign Language (ÖGS). Sign Language & Linguistics 9(1). 207–231.
Senghas, Ann. 1995. Children's contribution to the birth of Nicaraguan Sign Language. Cambridge, MA: MIT PhD dissertation.
Smith, Carlota S. 1997. The parameter of aspect. Berlin: Springer.
Strickland, Brent, Carlo Geraci, Emmanuel Chemla, Philippe Schlenker, Meltem Kelepir, & Roland Pfau. 2015. Event representations constrain the structure of language: Sign language as a window into universally accessible linguistic biases. Proceedings of the National Academy of Sciences 112(19). 5968–5973.
Sutton-Spence, Rachel & Bencie Woll. 1999. The linguistics of British Sign Language: An introduction. Cambridge: Cambridge University Press.
Tang, Gladys. 2009. Acquiring FINISH in Hong Kong Sign Language. In James H-Y. Tai & Jane Tsay (eds.), Taiwan Sign Language and beyond, 21–47. Chia-Yi, Taiwan: The Taiwan Institute for the Humanities, National Chung Cheng University.
Vendler, Zeno. 1969. Linguistics in philosophy. Ithaca, NY: Cornell University Press.
Warren, Katherine Norton. 1978. Aspect marking in American Sign Language. In Patricia Siple (ed.), Understanding language through sign language research, 133–159. New York: Academic Press.
Wilbur, Ronnie B. 1987. American Sign Language: Linguistic and applied dimensions. Boston: College Hill.
Wilbur, Ronnie B. 2005. A reanalysis of reduplication in American Sign Language. In Bernhard Hurch (ed.), Studies in reduplication, 593–620. Berlin: Mouton de Gruyter.
Wilbur, Ronnie B. 2008. Complex predicates involving events, time and aspect: Is this why sign languages look so similar? In Josep Quer (ed.), Signs of the time. Selected papers from TISLR 2004, 217–250. Hamburg: Signum Press.
Wilbur, Ronnie B. 2009. Productive reduplication in a fundamentally monosyllabic language. Language Sciences 31(2). 325–342.
Wilbur, Ronnie B. & Evie A. Malaia. 2008a. Contributions of sign language research to gesture understanding: What can multimodal computational systems learn from sign language research. International Journal of Semantic Computing 2(1). 5–19.
Wilbur, Ronnie B. & Evie A. Malaia. 2008b. From encyclopedic semantics to grammatical aspects: Converging evidence from ASL and co-speech gestures. Paper presented at the 30th Annual Meeting of the German Linguistics Society, Bamberg.
Zacks, Jeffrey M., Nicole K. Speer, & Jeremy R. Reynolds. 2009. Segmentation in reading and film comprehension. Journal of Experimental Psychology: General 138(2). 307–327.
Zeshan, Ulrike. 2003a. Aspects of Türk İşaret Dili (Turkish Sign Language). Sign Language & Linguistics 6(1). 43–75.
Zeshan, Ulrike. 2003b. Indo-Pakistani Sign Language grammar: a typological outline. Sign Language Studies 3(2). 157–212.
Zucchi, Sandro, Carol Neidle, Carlo Geraci, Quinn Duffy, & Carlo Cecchetto. 2010. Functional markers in sign languages. In Diane Brentari (ed.), Sign languages, 197–224. New York: Cambridge University Press.

10
DETERMINER PHRASES
Theoretical perspectives

Natasha Abner

10.1 Introduction

The syntactic study of determiner phrases (DPs) has focused on the optional and obligatory constituents of the DP across languages, the word order patterns among these constituents, and the morphosyntactic manifestation of relations between DP-internal constituents. Such issues pertain to the internal structural characteristics of DPs and will be the focus of the present chapter (for questions of the external distributional patterns of DPs, see Cecchetto, Chapter 13, and Kuhn, Chapter 21; for the semantic properties of determiner phrases, see Barberà, Chapter 18, Kimmelman & Quer, Chapter 19, and Schlenker, Chapter 23; for relative clause modifiers, see Branchini, Chapter 15). Though phenomena like grammatical gender and noun classes have not been identified in any sign language (and seem unlikely, given the patterns that have emerged in documented sign languages), a cross-linguistically robust phenomenon that has emerged is the existence of a class of noun signs that are formationally related to a verb. Thus, the discussion begins by examining what may be considered the contentful, lexical core of the DP: the noun. Section 10.3 addresses the somewhat controversial status of the determiner component in the DPs of sign languages and whether the indexical sign that frequently co-occurs with a noun has the status of determiner. The discussion is contextualized within the broader theoretical issue of whether a determiner layer is necessary in order for a nominal constituent to have argumental status and function as a referential expression. Section 10.4 turns to word order patterns between DP constituents and how typological and theoretical claims may be ported to and informed by the study of sign languages. The structure of possessive expressions is covered in Section 10.5. Section 10.6 concludes the chapter and provides some suggested avenues of future research.

10.2 Building nouns

Though DP structure exhibits a certain degree of cross-linguistic variation, one fundamental property of DPs across languages is that they function as an extended projection of a nominal core. As in spoken languages, this nominal core of the DP in sign languages may be occupied by a simplex noun.1 This is the case with the American Sign Language (ASL) sign APPLE or the Nicaraguan Sign Language (NSL) sign MAN (HOMBRE) as well as their translational equivalents in English. However, it is also the case that the element at the core of the DP may have non-nominal origins, as in the assessment (from [V assess]) or his happiness (from [Adj happy]). Across documented sign languages, an interesting pattern emerges: there is a class of formationally related nouns and verbs marked cross-linguistically in morphophonologically similar ways. Moreover, across these sign languages, lexical category (N vs. V) appears to be evident in the surface form of the sign itself. This pattern was first observed by Supalla & Newport (1978) for ASL. They documented 100 noun-verb pairs (e.g., SIT/CHAIR, STAPLE/STAPLER, OPEN-DOOR/DOOR) that are formationally (and semantically) related as well as morphophonologically marked for their nominal or verbal status. Though verbal members (e.g., SIT) of these pairs vary in form depending on the lexical aspect of the predicate, their nominal members (e.g., CHAIR) exhibit formational consistency: repeated movement that is both short (relative to the verb) and tense (termed 'restrained manner' by Supalla and Newport). This contrast is shown in Figure 10.1.

Figure 10.1  The single, long movement of the verbal form SIT (left) as compared to the small, repeated movements of the nominal form CHAIR

Subsequent research on a variety of sign languages has documented similar, though not identical, patterns. The 'restrained manner' of movement is observed in noun-verb pairs in Jordanian Sign Language (LIU, Hendriks 2008). Like ASL, Russian Sign Language (RSL, Kimmelman 2009), Australian Sign Language (Auslan, Johnston 2001), LIU, and NSL (as used by later cohorts, Abner et al. 2019) exhibit a pattern wherein nouns are marked by repetition of movement. Moreover, Abner et al. find that all cohorts of NSL users consistently use smaller movements with nouns. Smaller movements also characterize nouns (versus verbs) in both child (Goldin-Meadow et al. 1994) and adult homesign (Abner et al. 2019), the self-created gestural communication systems that are arguably the first stage of sign language genesis. Larger movements also characterize verbs (versus nouns) in RSL, Auslan, and Italian Sign Language (LIS, Pizzuto & Corazza 1996). There is, however, a degree of cross-linguistic variation. For example, it is the elimination of movement repetition that may distinguish nouns from verbs in Turkish Sign Language (TİD, Kubus 2008) and the Sign Language of the Netherlands (NGT, Schreurs 2006). There are also morphophonological indicators of category that are used only in some sign languages. In LIS, verbs also exhibit a longer duration than nouns, as has also been found in Austrian Sign Language (ÖGS, Hunger 2006) and TİD, while in Nicaraguan
homesign and NSL, the presence (verbs) or absence (nouns) of the non-​dominant hand as a ‘base hand’ may distinguish category (Abner et al. 2019). Nevertheless, this pattern may be generalized as follows:  across sign languages, movement is used to distinguish members of related noun-​verb pairs and, moreover, it is movement reduction that tends to characterize the nominal member of these pairs.2 In the studies summarized above, the nominal member of the pairs is also characterized by its concrete meaning. Abner (2017) extends Supalla and Newport’s ASL paradigm to include a class of abstract, result-​denoting nominals (e.g., PARTICIPATION ). In doing so, she presents an analysis that may provide an explanation for the noted cross-​linguistic similarity. Appealing to Wilbur’s (2003) Event Visibility Hypothesis (for details, see Malaia & Milković (Chapter 9)), Abner observes that telic predicates in ASL have two discrete morphophonological components (associated with the morphosyntactic components of telicity in language structure more generally):  a spatio-​temporal end-​marking that encodes event telicity and a spatial path movement that encodes event duration. She then observes that the verbal structure underlying these derived nominals contains only the structure associated with end-​marking (telicity). Crucially, because the spatial path movement encoding event duration is absent from the structure of the derived nominal but present in the related verbal form, the nominal surfaces with a short movement. In addition to the converging evidence presented from other morphosyntactic domains, Abner shows that deriving these nouns from the verbal projection of telicity also provides an explanatory account of their semantic properties. If the Event Visibility Hypothesis is active across sign languages, as Wilbur proposes that it is, and if languages, signed and spoken, share a computational engine (similar nominalization structures have been proposed for spoken languages), then it is unsurprising that nominalization processes are formationally similar across sign languages. Their similarity is merely an epiphenomenal consequence of the interaction of these two phenomena:  event structure properties are morphophonologically similar across sign languages and morphosyntactically manipulated in similar ways across human languages, independent of modality.

10.3 Building… determiner phrases?

The nominalization process Abner (2017) proposes applies to small constituents projected low in the verbal domain. Interestingly, it was nominalization of larger verbal constituents (e.g., so-called Poss-ing gerunds of the Bruno's buying a trailer type) that motivated early theoretical analyses of DPs. Abney (1987) argued that nominal constituents (a neutral/pre-theoretical term adopted here) surface syntactically as part of an extended functional projection headed by a D(eterminer), on a par with the functional architecture built around (and containing) the verbal projections of the sentential domain. The proposed DP status of nominal constituents provides a straightforward explanation for the observation that, in many languages, an argumental nominal obligatorily co-occurs with a functional element that can be thought to occupy the D-head of this DP structure (e.g., *(The/a) monkey dances). As for those (rather numerous) 'article-less' languages that do not exhibit overt evidence of the D-head, two analyses are possible: either the nominal constituents of these languages do have this DP structure, albeit with a covert or silent D-head, or the nominal constituents of these languages lack overt determiners because they do not have DP structure. The latter possibility has gained popularity and traction in recent years, with a number of researchers proposing that the NP/DP structure of nominal constituents is a parameterized property
of human language and offering a number of diagnostics to identify the setting of this parameter in any given language. The most surface-apparent of these diagnostics is whether (or when) articles appear with nominal constituents in a given language. Here, however, the data from sign languages is diagnostically problematic. Like languages with an 'NP setting' of this parameter, all documented sign languages permit article-less bare nouns in argument positions. However, all documented sign languages also permit argumental nominal constituents that contain an indexical pointing sign alongside overt nominal material (in addition to standalone pronominal indexical pointing signs). A cross-linguistic sampling of bare nominals as well as nominals with an overt indexical pointing sign is provided in (1) and (2), respectively; as illustrated in (1d) and (2d), this alternation is also attested in homesign (transcriptions have been adapted to match, and the DP is bracketed and bolded; IPSL = Indo-Pakistani Sign Language, ISL = Israeli Sign Language).

(1) a. [WOMAN] SAD
       'The woman is sad.' (IPSL, Zeshan 2003: 169)
    b. [CHILD] TABLE BUY
       'A child buys a table.' (LIS, Mantovan 2015: 107)
    c. [TEACHER] SICK, LECTURE CANCEL
       'The teacher is sick and the lecture is canceled.' (ISL, Meir & Sandler 2007: 165)
    d. [LOLLIPOP] GIVE
       '(You) give (me) lollipop.' (Homesign-USA, Hunsicker & Goldin-Meadow 2012: 739)

(2) a.                     neg
       [HEARING IXi] IX1 AFRAID
       'I am not afraid of hearing people.' (IPSL, Zeshan 2003: 171)
    b. [CHILDREN IXi] SOCCER PLAY
       'These children play soccer.' (LIS, Mantovan 2015: 92)
    c. [BOY IXi] BUY BOOK
       'The boy bought a book.' (ISL, Meir & Sandler 2007: 113)
    d. [TRACK IXi] PUT-DOWN
       '(You/experimenter) put down that track (in spot).' (Homesign-USA, Hunsicker & Goldin-Meadow 2012: 752)

The article-like distribution of the pointing sign in (2) suggests that it may be the exponence of a determiner. The questions raised by the aforementioned theoretical landscape and the observed sign language data are twofold. First, where do sign languages fall with respect to the NP or DP status of their nominal constituents? Second, within the nominal constituent structure, what role is played by the indexical pointing sign? Each of these questions is addressed in turn in the sections that follow.

10.3.1 Sign languages and the NP/DP parameter

The presence or absence of overt articles has been correlated with a number of syntactic and semantic patterns across languages. These range from the grammaticality of left branch extraction from nominal constituents to the availability of a majority (vs.
plurality only) interpretation of nominal constituents with a superlative quantifier like most (see Bošković (2008) and Corver (1992), among others). Such patterns suggest that the distinction between languages with and without articles is not merely whether the D-head is overt or null. Instead, these patterns suggest that languages may differ in whether the D-head is present at all. This possibility is the substance of the NP/DP parameter and the associated proposal that languages may vary in whether or not nominal constituents are housed within the functional DP structure Abney identified. Chierchia (1998) links this parameterization to the semantic denotation of nouns; in languages like Chinese, he argues, nouns are kind-denoting and thus need not surface with a determiner in order to function as arguments. Longobardi (2008), however, links the permissibility of non-DP arguments to the absence of grammaticalized person features within languages like Japanese (see also Fukui (1988)). Detailed evaluation of sign languages with respect to these proposals has, to date (and to the author's knowledge), only been undertaken for ASL. With respect to Chierchia's diagnostics, Abner (2012) observes that nouns in ASL do not behave as if they are uniformly kind-denoting (as Chierchia proposes for Chinese-type languages with argumental bare nouns). She notes, for example, that there is no generalized classifier system present in the language. Moreover, though ASL, like many (sign) languages, does not exhibit a required singular/plural distinction, it does have morphological markers for number. These include markers that indicate dual plurals (3), trial plurals, as well as plural quantities over three (termed 'exhaustive' marking; see Supalla & Newport (1978) and Padden (1988) for ASL and Steinbach (2012) for an overview of plurality in sign languages).

(3) NEWSPAPER DISCUSS ABOUT TWO DIFFERENT ELECTION[plural-dual]
    'The newspaper talked about two different elections.'
    (ASL)

Whether or not plural marking is obligatory, the presence of morphological number marking is the third piece of evidence that ASL does not function as a language with NP nominal arguments according to the criteria laid out by Chierchia. Using Longobardi’s (2008) approach to the NP/​DP status of nominal arguments, it is also relatively straightforward to conclude that ASL (and likely some other sign languages) does not pattern with NP-​only languages. Longobardi proposes that NP-​ only languages are just those languages that fail to grammaticalize person features. Since ASL exhibits at least a first/​non-​first person contrast on pronominals and in the verbal agreement system (Meier 1990), it is clearly not an NP-​only language under Longobardi’s criterion. Moreover, similar person distinctions are made in other sign languages that use a spatial reference system. Thus, any sign language that uses spatial reference in this way also likely fails to meet Longobardi’s criterion of an NP-​only language. Turning finally to the set of criteria laid out by Bošković (see Bošković (2008) and subsequent work), Bernath (2009) also concludes that ASL does not exhibit the patterns of NP-​only languages. Consider, for example, the interpretation of superlative quantifiers. Semantically, superlative quantifiers may have one of two interpretations: (i) a plurality interpretation, in which most X are Y means that more X are Y than are any other contextually relevant descriptor (e.g., Z); or (ii) a majority interpretation, in which most X are Y means that more than half of the X’s are Y’s. As illustrated in (4), both interpretations are possible in ASL.
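Before turning to the ASL data in (4), the two interpretations just described can be made concrete. The set-theoretic statement below is only an expository sketch added here (it is not Bernath's or Bošković & Gajewski's own formulation): for most X are Y, with Z ranging over the contextually relevant alternative descriptors,

(i) plurality reading: |X ∩ Y| > |X ∩ Z| for every alternative Z
(ii) majority reading: |X ∩ Y| > ½ · |X|

Against the context given in (4a), André's four Superman movies outnumber Jeff's two and Diane's one but do not exceed half of the ten movies, so only (i) is satisfied; in the context of (4b), André's eight of ten movies exceed half but not Jeff's ten, so only (ii) is satisfied.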


(4) a. Context: There exist ten movies featuring Superman. André owns four of these movies, while Jeff owns only two, and Diane just one.
       ANDRÉ OWN MOST SUPERMAN MOVIE
       'André owns the most Superman movies.' [plurality, not majority]
    b. Context: There exist ten movies featuring Superman. Jeff owns copies of all 10, while André owns eight of them, and Diane owns just four.
       ANDRÉ OWN MOST SUPERMAN MOVIE
       'André owns most Superman movies.' [majority, not plurality]
       (ASL, Bernath 2009: 8)

Crucially, according to Bošković & Gajewski (2011), the availability of the majority interpretation is only evidenced in languages that have articles – that is, languages with nominal constituents containing a DP layer. Thus, according to any of the existing approaches to NP/DP parameterization, ASL does not behave like an NP-only language. Koulidobrova (2017), however, provides evidence from recent work on verbal and nominal ellipsis that elided nominal constituents may be bare NPs in ASL. This raises the possibility that the NP or DP status of nominal arguments may be variable within a language (a takeaway that also emerges in much of the research discussed in Section 10.4). Koulidobrova's evidence is based on the interpretation of elided arguments that are co-referent with nominals not assigned an overt spatial locus. This is illustrated in the contrast between (5a) and (5b). Eliding an argument (marked by the Ø) results in ungrammaticality if its antecedent has a spatial locus (a-MOTHERi in (5a); the initial 'a-' indicates spatial localization and the final subscripted 'i' indicates interpretation); if the argument is instead overtly realized (a-IX in (5a)), the structure is fine. If the antecedent is not localized, eliding the argument does not result in ungrammaticality (5b).

(5) a.    t
       a-MOTHERi, 1-IX DON'T-KNOW WHAT {a-IX / *Ø} LIKE[plain]
       'Mother, I don't know what she likes.'
    b.    t
       MOTHER, 1-IX DON'T-KNOW WHAT Ø LIKE[plain]
       'Mother, I don't know what she likes.'
       (ASL, Koulidobrova 2017: 400, 402)

Koulidobrova argues that in the case of the grammatical ellipsis (5b), the argument elided is a bare NP, not a DP. This suggests that spatial localization is encoded in the functional periphery of the DP domain, not as part of its nominal core, a pattern that is unsurprising given that spatial localization and the functional architecture of the DP are both standardly assumed to deal with reference assignment. Because spatial loci encode person distinctions, these patterns provide converging evidence that ASL is not a Longobardi-style NP-only language. Patterns like those in (5) also raise the important issue of the role of non-manual components in the structure of nominal constituents (see also Wilbur, Chapter 24, and Woll & Vinson, Chapter 25). MacLaughlin (1997) and Neidle et al. (2000) argue – again on the basis of data from ASL – that non-manual markers occupy the heads of functional projections. They propose that eye gaze and head tilt are the non-manual exponence of morphosyntactic agreement in both the clausal and nominal domain. Adopting a DP
analysis of nominal constituents in ASL, they argue that these agreement markers within the nominal domain are housed on the D-​head (non-​possessive DPs) or on both the D-​ head and a dedicated DP-​internal agreement projection (possessive DPs). The resulting structural configurations generate non-​manual markings of the DP like those in (6): in (6a), both head tilt (ht) and eye gaze (eg) are directed toward the referent of the non-​ possessive DP; in the possessive DP in (6b), head tilt is directed toward the possessor and eye gaze toward the possessee.3      

(6) a.                   hti         egi
       IX[pro1p] KNOW [IX[det-i] OLD MAN]DP
       'I know the old man.'
    b.     hti      egj
       [JOHNi POSSi FRIENDj]DP HAVE CANDY
       'John's friend has candy.'
       (ASL, Neidle et al. 2000: 98, 100)

Similar patterns have also been reported for Hong Kong Sign Language (HKSL, Tang & Sze 2002), though here it seems that only eye gaze, not head tilt, plays a role. Again, for MacLaughlin and Neidle et al., the non-​manual agreement markers within the DP parallel entirely the non-​manual agreement markers within the clause, where they mark subject (head tilt) and object (eye gaze) agreement. Evidence from eye tracking, however, shows that eye gaze in the verbal domain goes along with manual agreement but does not act independently as an agreement marker (Thompson et al. 2006), suggesting that analysis of the role of non-​manual marking within the DP may also benefit from the insight offered by experimental methodologies.

10.3.2 The categorial status of pointing signs

The research discussed in the previous section shows that the functional architecture of the DP is present in at least some nominal constituents in ASL and likely in other sign languages. Thus, it would seem that this group of bare noun-permitting languages should not be characterized as one in which parameterization is set to not project the functional DP architecture. Against this theoretical and empirical backdrop, we can then begin to understand the challenge posed by the distribution of indexical pointing in sign languages. Indexical pointing seems to be universally present in repertoires of co-speech gestures across speech communities and all documented sign languages have been observed to grammaticalize indexical pointing as part of the linguistic system (Coppola & Senghas 2010; Pfau 2011; Pfau & Steinbach 2011).4 As described earlier, pointing signs may surface in isolation with a pronoun-like function or may combine with overt nominal material. Both of these possibilities are illustrated in (7), where the subject position is occupied by a standalone IXj and the object position is occupied by an IXk co-occurring with the nominal sign DOG.5

(7) IXj ADOPT [IXk DOG]
    'They adopted the dog.'
    (ASL)


Each of these possibilities is also attested in child (Hunsicker & Goldin-Meadow 2012) and adult (Coppola & Senghas 2010) homesign; indeed, Hunsicker & Goldin-Meadow (2012) use data like that in (2d) above to argue for constituency and hierarchy in homesign grammars. Structures like [IXk DOG] as well as the examples in (1) and (2) lie at the heart of the debate discussed in the previous section. Going back to some of the earliest modern descriptions of sign language grammar, researchers observed that these uses of IX exhibit characteristics similar to those of determiners in spoken languages (Hoffmeister 1977; Wilbur 1979). For example, DOG in (7) is obligatorily interpreted as definite, whereas its bare noun counterpart would be compatible with both a definite and an indefinite interpretation (see Wilbur 1979; Zimmer & Patschke 1990; and MacLaughlin 1997).6 However, if bare nominals are compatible with both definite and indefinite interpretations, as they are in many sign languages, then pointing signs as markers of definiteness are, strictly speaking, optional. Moreover, save for obligatorily definite interpretations, researchers have been hard-pressed to find clear semantic differences between the interpretation of a bare nominal and that of a nominal co-occurring with IX. Alternations similar to (1) and (2) have been reported in other sign languages (see, among others, Quadros (1999) for Brazilian Sign Language, Tang & Sze (2002) for HKSL, and Alibašić & Wilbur (2006) for Croatian Sign Language). This 'optionality' is a challenge for approaches that align indexical pointing signs with definite determiners. Spoken languages, of course, exhibit variation, but 'optionality' in the presence or absence of a definite determiner usually correlates with other syntactic (e.g., subject vs. object) or semantic (e.g., genericity) differences. Again, this does not appear to be the case in ASL (and possibly other sign languages). Both Zimmer & Patschke (1990) and MacLaughlin (1997) provide a straightforward (albeit typologically uncommon) explanation, proposing that the indexical point in ASL may function as a determiner, albeit an optional one. Such an analysis is problematic for reasons like those discussed in Section 10.3.1: determiners contribute significantly to the syntax and semantics of nominals and so are not expected to be optional. There is contemporary consensus that analyzing the indexical point in ASL as a determiner analogous to English the is likely incorrect.7 Instead, recent cross-linguistic research converges on the proposal that indexical pointing may serve a demonstrative function in sign language, an approach that clearly comports well with its potential origins in gestural deixis. Moreover, because demonstratives are a common source for pronominals across languages, such analyses also explain why the indexical point can occur alongside overt nominal material or as a standalone pronominal form. The demonstrative analysis of indexical pointing has been suggested for ASL by Abner (2012) and pursued in depth by Koulidobrova & Lillo-Martin (2016).8 Certain indexical points have also been classified as demonstratives in NGT (Vink 2004; Brunelli 2011), Taiwan Sign Language (TSL, Lai 2005), LIS (Branchini 2006; Bertone 2009; Brunelli 2006, 2011), and RSL (Kimmelman 2017). For Polish Sign Language (PJM), Rutkowski et al. (2015) remain neutral as to whether certain instances of the indexical point have a demonstrative or determiner function.
Finally, echoing the multi-​functionality proposed for other sign languages (e.g., MacLaughlin (1997) for ASL, Vink (2004) for NGT), Nuhbalaoglu & Özsoy (2014) argue that the indexical point may serve several functions in TİD, one of which they classify as demonstrative.


In defense of the demonstrative analysis, Koulidobrova & Lillo-Martin (2016) show that nominal constituents marked with IX in ASL can only be interpreted referentially, not quantificationally. This diagnostic, illustrated in (8), comes from Nowak (2013), who observes that an English nominal constituent with a definite determiner (the guy in the red shirt) can have both a referential and quantificational interpretation, whereas using a demonstrative (that guy in the red shirt) yields only the referential interpretation. Koulidobrova and Lillo-Martin find that nominal constituents marked with IX in ASL behave like the latter. Thus, the subject nominal constituent in (8) can only be interpreted as referring to a particular individual – one who consistently wins. It cannot have a co-variant quantificational interpretation in which the winner is consistently whoever is wearing a red shirt ('a-' again signals spatial localization; 'rc' = relative clause).

                          rc
(8) {a-IX PERSON / a-IX} RED SHIRT SELF TEND WIN
    'IX person / IX in the red shirt tends to win' = referential/*quantificational
    (ASL, Koulidobrova & Lillo-Martin 2016: 237)

Koulidobrova and Lillo-Martin further observe that the indexical point cannot co-occur with nominals referring to global uniques, such as the capital of France in (9).

       t                  wh
(9) FRANCE (*IX) CAPITAL WHAT
    'What is the capital of France?'
    (ASL, Koulidobrova & Lillo-Martin 2016: 234)

Here, too, the contrast parallels a distinction between determiners and demonstratives. The English determiner the is grammatical (and required) in this context, whereas its demonstrative counterpart is disallowed: *What is that capital of France? Interestingly, global uniques also fall into the class of 'intrinsically definite' nouns that are incompatible with the definiteness suffix in Hausa (Jaggar 2001). As with indexical pointing in sign language, the definiteness suffix in Hausa is superficially 'optional', though Jaggar (2001) and others have correlated its presence with the discourse familiarity of the referent. Discourse sensitivity of a superficially 'optional' determiner has also been proposed for Old French by Mathieu (2009). As in documented sign languages, bare nouns in Old French permitted both definite and indefinite interpretations. Overt presence of the 'optional' definite determiner was influenced by metrical restrictions in the phonology or by focalization of the nominal constituent. Discourse sensitivity explanations for the distribution of IX also appear in the sign language literature, though the dimension of discourse argued to be relevant is not familiarity or focalization but the kind of mental space invoked in the signer's utterance. Liddell (1994, 1995) proposes that sign language utterances can be conceptualized within real, surrogate, or token space. Real space is the signer's mental conception of the actual, physically present world. Surrogate space is like real space, but it represents a non-local (physically and temporally) world – that is, another situation. Token space is like surrogate space, but it represents the non-local world as a confined model within a smaller area
of signing space. That is, token space is like a signed diorama, whereas surrogate space is exactly like real space except that its descriptive content is non-actual. The mental space in which an utterance is conceptualized may affect the structure of nominal constituents and the distribution of IX. Bare nouns are common in surrogate space descriptions in HKSL (Tang & Sze 2002) and Swedish Sign Language (Ahlgren & Bergman 1994) but not as common in token space descriptions in HKSL. Moreover, Tang and Sze found that the use of an indexical point with a definite referent was preferred in surrogate space as compared to definite nominals in real space. Thus, the spatial reference system of sign languages may give rise to modality-specific effects in how conceptualizations are linguistically represented. Much work remains to be done, especially cross-linguistically, in documenting and analyzing these and other fine-grained syntactic and semantic properties of nominal constituents in sign languages. Nevertheless, two conclusions can be tentatively drawn based on the findings thus far. First, though bare nominals frequently appear, sign languages do not behave like article-less/NP-only languages of the Slavic, Chinese, or Japanese type. Second, the indexical point, when present, is not equivalent to an English-like determiner the; it patterns like a demonstrative and its distribution may be mediated by discourse factors.

10.4 Word order patterns

Documentation and analysis of word order patterns has been of special interest in linguistics since at least Greenberg (1963). Greenberg observed that surface word order exhibited strong implicational and correlational patterns within and across languages. As for the order within nominal constituents, Greenberg proposed a number of 'universals' concerning the position of elements relative to the noun. One universal that has received significant attention in the theoretical literature, including in sign language research, is Universal 20, which concerns the order of demonstratives, numerals, adjectives, and the noun. Cinque (2005) and Abels & Neeleman (2012) have developed predictive, generative accounts of the word order patterns of these nominal components, showing how surface patterns may reveal underlying structural properties of the syntactic system. Both proposals argue that the observed word order patterns arise from the interaction of the base merge hierarchy, Demonstrative > Numeral > Adjective > Noun, and restrictions on the movement operations that derive orders other than those created by merge alone. In Cinque's system, for example, Demonstrative-Numeral-Noun-Adjective order may be derived from the base merge structure by moving the noun to a functional projection between the numeral and the adjective. This movement is licit and the order it generates is widely attested. Generation of Demonstrative-Adjective-Numeral-Noun order, however, would require moving the adjective to a functional projection between the demonstrative and the numeral. This is disallowed because movement operations must target a projection containing the noun. Notably, Demonstrative-Adjective-Numeral-Noun order is unattested. Thus, such an approach uses constraints on merge and move to provide an explanatory account of the word order patterns that do and do not emerge in human language, including word order patterns that were not part of Greenberg's original generalization but are attested. Because word order patterns arise as a consequence of constraints on the computational engine, it is unsurprising that research on sign languages reveals that they, too, exhibit these patterns. Moreover, as in many spoken languages, research on
word order within nominal constituents of sign languages has revealed that they exhibit language-internal word order variability. These language-internal variants obey the aforementioned restrictions and, in some cases, track the interpretation of the nominal. Comparing LIS and NGT, Brunelli (2011) found that LIS allows only Noun-Demonstrative order whereas NGT allows both Demonstrative-Noun and Noun-Demonstrative order. NGT also allows adjectives and numerals to be in both prenominal and postnominal position, whereas LIS, Brunelli observes, exhibits consistent postnominal order of adjectives and numerals (see also Bertone (2007) for LIS). Using corpus data in addition to traditional fieldwork, however, Mantovan (2015; see also Mantovan & Geraci (2017)) documents more variability in LIS than previously reported (see Rutkowski et al. (2015) for a corpus-based analysis of word order patterns in PJM). Specifically, Mantovan found that 36% of modifiers occurred in a prenominal position. Prenominal and postnominal placement of an adjectival modifier in LIS are illustrated in (10).

(10) a. TRY GO-FOR-IT, FIRST STEP [BEAUTIFUL EXPERIENCE] GO-AHEAD
        'Give it a try, go for it. After the first step, you'll see it's a beautiful experience and you'll go ahead with it!'
     b. LEAVE PARIS GO-TO [EXPERIENCE BEAUTIFUL]
        'I left and went to Paris, it was a beautiful experience.'
        (LIS, Mantovan 2015: 144–145)

Mantovan found that variation between prenominal and postnominal position of modifiers was modulated by both linguistic and sociolinguistic factors. Supplementing the corpus data with consultant intuitions, for example, Mantovan showed that the position of the cardinal numeral modifier relative to the noun correlates with definiteness (see Abner (2012) for similar patterns with respect to cardinal and possessive order in ASL). Postnominal cardinal numerals (CHILD TWO) are compatible with both indefinite (11a) and definite (11b) interpretations whereas prenominal numeral modifiers (TWO CHILD) may only be interpreted indefinitely. That is, *TWO CHILD is 'ungrammatical' in (11b) because it cannot be interpreted as a definite. Given a base merge order of Numeral > Noun, this means that movement of the head noun over the numeral is obligatory in the definite nominal, a pattern that is reminiscent of the N-to-D raising analysis of definiteness proposed for a number of other languages (see, for example, Ritter (1988), Borer (1994), and Siloni (1994) for Hebrew; Longobardi (1994) for Italian; Delsing (1993) for Scandinavian languages; Mohammad (1988) for Arabic; and Duffield (1995) for Celtic). Indeed, Mantovan's account of the alternation in (11) is in the spirit of N-to-D raising analyses, albeit within a Cinquean framework: the head nominal in (11b) obligatorily moves to a functional agreement projection just below the DP in order to check a definiteness feature. The analysis thus provides indirect evidence for the argument made in Section 10.3.1 that nominal constituents in sign languages project a DP layer, even when no determiner is present.

(11) a. TWO CHILD, CHILD TWO
        'two children' [indefinite]
     b. *TWO CHILD, CHILD TWO
        'the two children' [definite]

(LIS, Mantovan 2015: 210)
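The alternation in (11) can be represented schematically with labeled bracketing. The structures below are only an illustrative sketch of the analysis just described (the label AgrP is used here as convenient shorthand for the 'functional agreement projection just below the DP' and is not Mantovan's own notation):

(i) noun in situ: [DP [NumP TWO [NP CHILD]]] → TWO CHILD
(ii) noun raised: [DP [AgrP CHILDi [NumP TWO [NP ti]]]] → CHILD TWO

On this sketch, the indefinite in (11a) is compatible with either structure, which is why both orders surface, whereas the definite in (11b) requires the raised structure in (ii), ruling out prenominal TWO CHILD.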


With respect to sociolinguistic factors, Mantovan found that postnominal modifiers were more likely to be used by signers born either before 1945 or after 1965 and were less frequently produced by signers with Deaf parents. Because Brunelli compares LIS and NGT, and Mantovan compares variability within LIS, these approaches show that a restrictive, generative approach to word order within nominal constituents can be extended to intra- and inter-linguistic variation in sign languages. A comparable conclusion is drawn by Zhang (2007) based on intra-linguistic variation in TSL, as documented by Lai (2005). With respect to the order of the four elements in Universal 20, TSL exhibits four possible orders.9 Each of the possible orders is illustrated in (12).

(12) a. Demonstrative-Numeral-Adjective-Noun
        [IX[det.pl] FIVE NAUGHTY BOY] IX[pro1s] BELONG-TO STUDENT
     b. Demonstrative-Adjective-Noun-Numeral
        [IX[det.pl] NAUGHTY BOY FIVE] BELONG-TO STUDENT
     c. Demonstrative-Numeral-Noun-Adjective
        [IX[det.pl] FIVE BOY NAUGHTY] IX[pro1s] BELONG-TO STUDENT
     d. Demonstrative-Noun-Adjective-Numeral
        [IX[det.pl] BOY NAUGHTY FIVE] IX[pro1s] BELONG-TO STUDENT
        'These five naughty boys are my students.'
        (TSL, Lai 2005, cited in Zhang 2007: 65)

With respect to the TSL data in (12), Zhang's theoretical claims are threefold. One, as also evidenced in the previous studies of LIS and NGT, existing theoretical proposals are compatible with and appropriate for sign language data. Two, as with the analysis of variation attested in LIS, these theoretical models are best framed as explanatory accounts of both inter- and intra-linguistic variation. Three, data from sign languages may help advance theoretical analyses or mediate theoretical disagreements. To this end, Zhang offers the TSL data as a potential mediator between heretofore unmentioned differences in the proposals of Cinque (2005) and Abels & Neeleman (2012). These proposals do impose identical restrictions on movement operations and assume the same underlying base merge hierarchy (Demonstrative > Numeral > Adjective > Noun). However, they differ in how the base merge structures may be linearized. Cinque assumes a fully antisymmetric system with only leftward movement and all elements merging to the left of the (projection containing the) noun. Thus, the base merge structure outputs an order that matches the hierarchy: Demonstrative-Numeral-Adjective-Noun. Abels and Neeleman, however, impose the antisymmetry restriction only on movement and allow base merge of elements to both the left and the right of the (projection containing the) noun. Both approaches are equivalent in generative capacity in that they correctly exclude unattested word orders and are able to derive all of the attested orders. However, they differ in how the attested orders are generated. As explained at the outset of this section, Cinque's system derives Demonstrative-Numeral-Noun-Adjective order, such as (12c) in TSL, via movement of the noun over the adjective. Abels and Neeleman's system, however, permits this order to be derived by base merging the adjective to the right (vs. left) of the noun. Because TSL systematically permits both adjectives and numerals to occur on either side of the noun, Zhang argues from the standpoint of theoretical parsimony that the language provides additional support for Abels and Neeleman's symmetric base merge approach.10
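The single movement restriction at issue in this comparison can also be illustrated computationally. The sketch below is a deliberately simplified toy model written in Python for this overview; it is not an implementation of Cinque's (2005) or Abels & Neeleman's (2012) actual systems, and all function names are my own. It encodes only the base hierarchy Demonstrative > Numeral > Adjective > Noun and the one constraint named above, namely that a moving constituent must contain the noun and must land, leftward, at the edge of a constituent that previously contained it. Its sole purpose is to confirm mechanically that Demonstrative-Numeral-Noun-Adjective is derivable under that constraint while Demonstrative-Adjective-Numeral-Noun is not.

# Toy model of the leftward, noun-containing movement restriction described above.
BASE = ('Dem', ('Num', ('Adj', 'N')))          # base merge hierarchy Dem > Num > Adj > N

def leaves(tree):
    # Left-to-right terminal string of a nested tuple.
    if isinstance(tree, str):
        return (tree,)
    out = ()
    for child in tree:
        out += leaves(child)
    return out

def subtrees(tree, path=()):
    # Yield (path, subtree) pairs; a path is a tuple of child indices.
    yield path, tree
    if not isinstance(tree, str):
        for i, child in enumerate(tree):
            yield from subtrees(child, path + (i,))

def get_at(tree, path):
    for i in path:
        tree = tree[i]
    return tree

def replace_at(tree, path, new):
    if not path:
        return new
    i, rest = path[0], path[1:]
    return tuple(replace_at(c, rest, new) if j == i else c for j, c in enumerate(tree))

def remove_at(tree, path):
    # Delete the subtree at `path`, collapsing the resulting unary node.
    i, rest = path[0], path[1:]
    kept = [remove_at(c, rest) if (j == i and rest) else c
            for j, c in enumerate(tree) if not (j == i and not rest)]
    return kept[0] if len(kept) == 1 else tuple(kept)

def one_step_moves(tree):
    # Move an N-containing constituent to the left edge of one of its ancestors.
    results = set()
    for path, mover in subtrees(tree):
        if not path or 'N' not in leaves(mover):
            continue
        for k in range(len(path)):
            ancestor = get_at(tree, path[:k])
            pruned = remove_at(ancestor, path[k:])
            results.add(replace_at(tree, path[:k], (mover, pruned)))
    return results

seen, frontier, orders = {BASE}, {BASE}, {leaves(BASE)}
for _ in range(3):                              # up to three movement steps
    frontier = {t for s in frontier for t in one_step_moves(s)} - seen
    seen |= frontier
    orders |= {leaves(t) for t in frontier}

print(('Dem', 'Num', 'N', 'Adj') in orders)     # True: the noun raises over the adjective
print(('Dem', 'Adj', 'Num', 'N') in orders)     # False: the adjective cannot move on its own

In Abels and Neeleman's system, the attested Demonstrative-Numeral-Noun-Adjective order could instead be base generated by merging the adjective to the right of the noun, as described above, but the unattested Demonstrative-Adjective-Numeral-Noun order remains excluded on either approach.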


Returning to the other key takeaways of Zhang’s analysis, what remains to be carefully studied  –​in sign language but also in spoken language  –​is the extent to which intra-​linguistic word order variation is semantically motivated. In addition to the definiteness distinctions noted above, evidence of the semantic underpinnings of word order variation within nominal constituents is provided by Padden (1988) and MacLaughlin (1997), who argue that postnominal (vs. prenominal) adjectives behave predicatively, not attributively. The future of this line of research also likely lies in an approach like that taken by Mantovan, wherein qualitative and quantitative measures using corpus data as well as data from traditional fieldwork techniques help tease apart subtle structural and semantic differences in word order.

10.5 Possessives

One of the hallmarks of linguistic structure is the expression of relational information between constituents. In the nominal domain, the primary mechanism for expressing relational information is possession.11 One dimension of investigation for possessive expressions is the relationship between the possessee and possessor. Here, a standard contrast is that between inalienable (13a) and alienable (13b) possession, as illustrated by the examples from British Sign Language (BSL) in (13).12 Though sign languages, like spoken languages, often treat relational nouns (e.g., LEG) as inalienably possessed objects, there are also special signs and sign constructions for expressing 'possession' with relational nouns. In Catalan Sign Language (LSC, Quer et al. 2008), for example, there exists a special possessive marker, glossed as KINSHIP in (14). Possessives may also be used to encode bleached or extended semantic relations that are not possessive in the narrow sense (15).

(13) a. PRO-1 LEG
        'my leg'
     b. POSS-non1 BOOK
        'your book'
        (BSL, Cormier & Fenlon 2009: 406)

           t
(14) SISTER-IN-LAW KINSHIP DAUGHTER OPERATE-ON-NOSE
     '(My) sister-in-law's daughter had her nose operated.'
     (LSC, Quer et al. 2008: 44)

(15) iBRUNO POSSi BOOK
     'Bruno's book [that he is the author of]'
     (ASL)

As with the word order patterns discussed above, data from possessives reveal structural parallels across signed and spoken languages. For example, BSL is shown to structurally distinguish between possessive relations of different types, as illustrated by the use of the indexical possessive marker (PRO, -handshape) with the inalienable possessive in (13a) versus the possessive pronoun (POSS, -handshape) with the alienable possessive in (13b); similar observations can be made with respect to the KINSHIP marker in LSC in (14). Indexical possessives like that in (13a) have also been documented in LIS and NGT (Brunelli 2011) as well as TİD (Nuhbalaoglu & Özsoy 2014), though in the latter case the sign glossed as a possessive indexical is produced with a -handshape. It should be noted, however, that when the possessive is marked by an indexical pointing sign, it is
unclear if this is truly a possessive indexical or simply an indexical pronoun standing in the possessor position of a juxtaposition POSSESSOR-POSSESSEE structure. Indeed, this is the analysis assumed by Abner (2012) for 'possessive' uses of the indexical in ASL. In this and subsequent work (Abner 2013), she also shows that the possessive pronoun analysis of the sign commonly glossed as POSS is incorrect for ASL. Based on morphosyntactic evidence, including the ability to host transitive spatial agreement, she argues that the POSS sign is a verbal predicate. As with other predicates, POSS may be introduced as a (reduced) relative clause modifier within the DP. Though the surface pattern yielded may be one in which POSS looks like a possessive pronominal, she proposes that the literal translation of a possessive like (15) is more along the lines of book belonging to Bruno. Patterns such as these relate to how possessive meaning is structurally encoded within the DP, a topic that has been widely investigated across sign languages and reported on in contributions to Zeshan & Perniss (2008). Though many of the contributions to the Zeshan and Perniss volume address the semantic restrictions on possessive structures as well as the nature (and origins) of the possessive markers, they are primarily descriptive in nature. Having achieved this valuable and necessary step of documentation, the task of future research will be to both inventory more fine-grained patterns in the syntax and semantics of possession and to develop theoretical analyses of how these patterns arise.

10.6 Conclusion

The overview of theoretical approaches to DPs in sign languages presented here is by no means exhaustive, but it has touched on some fundamental issues in the analysis of DPs, in sign languages and in human languages more generally. The discussion has addressed the lexical core of DPs (the noun), the functional periphery of the DP, order and semantic relations among components of nominal constituents, and whether these nominal constituents are even associated with a DP structure. In each of the areas overviewed, it has been shown that sign languages do not exhibit structural patterns that are qualitatively different from those documented in spoken languages and, moreover, that similar linguistic analyses may be used for signed or spoken data. The areas discussed, however, are also areas where the study of sign languages may offer insight into the nature and structure of human language. For example, the distribution of indexical points may be a window into the possible semantic and pragmatic functions of nominal reference markers while the morphophonological properties of sign languages themselves may shine a light on the structure of derived nouns.

Notes

1 The present discussion sets aside the related theoretical question of whether 'simplex nouns' (and other 'lexical' items) are themselves derived from acategorial roots (see Marantz (1997) and Borer (2004), among others). The crux of the claim made here is that there is a structural distinction between these so-called 'simplex' and 'derived' nouns and that this distinction arises because the latter are systematically 'built from' a verbal base. This distinction remains if the present observations are cast in light of this alternative analysis of roots and categorization.
2 In those sign languages that use repetition as a marker of lexical category, we still observe movement reduction. This reduced movement may then be repeated for morphological purposes. As Abner (2017) notes, nominalization is a common function of reduplication cross-linguistically. Repetition of these reduced movements may also be independently motivated by minimum word constraints (Brentari 1998; Geraci 2009).
3 The examples in (6) are not the only possibilities under Neidle et al.'s analysis. Their analysis suggests that non-manual agreement marking is not obligatory and argues that alternative spreading patterns of the non-manual across the DP are possible. For reasons of space, I exemplify only the configurations in (6), which are sufficient to illustrate how the two markers associate with different components of the DP.
4 Universality of indexical pointing does not entail that (indexical) pointing gestures are not culturally transmitted (i.e., learned) nor does it mean that these gestures are formationally or functionally identical across cultures. Moreover, the universal presence of indexical pointing does not rule out the existence (or predominance) of other types of pointing gestures within a community. Though the prototypical pointing gesture in Western cultures is produced with the index finger, some cultures rely heavily on non-manual forms of pointing (Cooperrider et al. 2018). Little is known, however, about the grammaticalization of such non-manual pointing forms in sign languages. For a thorough overview of pointing, including a discussion of pointing in sign languages, please see contributions to Kita (2003).
5 The example in (7) uses 'they' as a gender-inclusive translation of the third person singular IX in ASL.
6 In ASL, MacLaughlin (1997) reports that nominals are obligatorily interpreted as definite only if the pointing sign occurs prenominally, as is true of IXk in (7). Zimmer & Patschke (1990), however, observe this effect in their data with both prenominal and postnominal points as well as with pointing signs produced simultaneously with the noun. For reasons of space, the present discussion will not address this additional dimension of syntactic and semantic complexity.
7 In recent work, Irani (2019) proposes that the behavior of the indexical point can be reconciled with that of determiners cross-linguistically if a distinction between strong and weak definites is taken into consideration. Irani proposes that nominal constituents marked by an indexical point (and a spatial reference locus) are strong definites in the sense of Schwarz (2009).
8 ASL also has a sign glossed as THAT. This sign has a number of morphophonological modulations, at least some of which co-vary with morphosyntactic function in a given usage (Liddell 1980). Moreover, the co-existence of indexical pointing signs and signs that receive a more traditional demonstrative gloss seems common across sign languages. This is not inherently problematic for the demonstrative analysis of indexical pointing; human languages frequently include rich paradigms of demonstratives that differ in their syntax and semantics. Unfortunately, the existing empirical and theoretical literature is not sufficient to map the landscape of demonstratives in ASL or other sign languages. Future research into these paradigms, however, should take care not to mistake the properties of the English gloss (that) for those of the ASL sign.
9 Similar intra-linguistic word order variation is also documented in spoken languages. For example, Cinque (2005: 324, footnote 29) observes that Nawdm (Togo) allows both Noun-Adjective-Numeral-Demonstrative (this) and Demonstrative (that)-Noun-Adjective-Numeral order, depending on which demonstrative is used.
10 Both Cinque's and Abels and Neeleman's analyses are also brought to bear on constituent order data in TİD by Nuhbalaoglu & Özsoy (2014).
11 Setting aside the broader question of the status and origin of 'arguments' in the nominal domain (Kayne 2009), ASL has been argued to lack argument-taking nominals (e.g., claim, MacLaughlin 1997); thus, possession would constitute the only true relational construction in the nominal domain.
12 The glossing in (13) reflects Meier's (1990) proposal that sign languages make use of a two-person (vs. three-person) system: PRO-1 in (13a) is a first-person pronoun, POSS-non1 in (13b) is a non-first person possessive marker.

References

Abels, Klaus & Ad Neeleman. 2012. Linear asymmetries and the LCA. Syntax 15(1). 25–74.
Abner, Natasha. 2012. There once was a verb: The predicative core of possessive and nominalization structures in American Sign Language. Los Angeles, CA: UCLA PhD dissertation.
Abner, Natasha. 2013. Gettin' together a POSSe: The primacy of predication in ASL possessives. Sign Language & Linguistics 16(2). 125–156.
Abner, Natasha. 2017. What you see is what you get.get: Surface transparency and ambiguity of nominalizing reduplication in American Sign Language. Syntax 20(4). 317–352.
Abner, Natasha, Molly Flaherty, Katelyn Stangl, Marie Coppola, Diane Brentari, & Susan Goldin-Meadow. 2019. The noun-verb distinction in established and emergent sign systems. Language 95(2). 230–267.
Abney, Steven P. 1987. The English noun phrase in its sentential aspects. Cambridge, MA: MIT PhD dissertation.
Ahlgren, Inger & Brita Bergman. 1994. Reference in narratives. In Inger Ahlgren, Brita Bergman, & Mary Brennan (eds.), Perspectives on sign language structure: Papers from the 5th International Symposium on Sign Language Research, 29–36. Durham, UK: International Sign Linguistics Association.
Alibašić Ciciliani, Tamara & Ronnie B. Wilbur. 2006. Pronominal system in Croatian Sign Language. Sign Language & Linguistics 9(1/2). 95–132.
Bernath, Jeffrey L. 2009. Pinning down articles in American Sign Language. Storrs, CT: University of Connecticut manuscript.
Bertone, Carmela. 2007. La struttura del sintagma determinante nella Lingua dei Segni Italiana (LIS). Venice: Ca' Foscari University PhD dissertation.
Bertone, Carmela. 2009. The syntax of noun modification in Italian Sign Language (LIS). University of Venice Working Papers in Linguistics 19.
Borer, Hagit. 1994. Deconstructing the construct. Amherst, MA: University of Massachusetts manuscript.
Borer, Hagit. 2004. The grammar machine. In Artemis Alexiadou, Elena Anagnostopoulou, & Martin Everaert (eds.), The unaccusativity puzzle: Explorations of the syntax-lexicon interface, 288–331. Oxford: Oxford University Press.
Bošković, Željko. 2008. What will you have, DP or NP? In Emily Elfner & Martin Walkow (eds.), Proceedings of NELS 37, 101–114. Amherst, MA: Graduate Linguistic Student Association.
Bošković, Željko & Jon Gajewski. 2011. Semantic correlates of the DP/NP parameter. In Suzi Lima, Kevin Mullin, & Brian Smith (eds.), Proceedings of NELS 39, 121–134. Amherst, MA: Graduate Linguistic Student Association.
Branchini, Chiara. 2006. On relativization and clefting in Italian Sign Language (LIS). Urbino: Università di Urbino PhD dissertation.
Brentari, Diane. 1998. A prosodic model of sign language phonology. Cambridge, MA: MIT Press.
Brunelli, Michele. 2006. The grammar of Italian Sign Language with a study about its restrictive relative clauses. Venice: Università Ca' Foscari MA thesis.
Brunelli, Michele. 2011. Antisymmetry and sign languages: A comparison between NGT and LIS. Amsterdam: University of Amsterdam PhD dissertation. Utrecht: LOT.
Chierchia, Gennaro. 1998. Reference to kinds across language. Natural Language Semantics 6(4). 339–405.
Cinque, Guglielmo. 2005. Deriving Greenberg's Universal 20 and its exceptions. Linguistic Inquiry 36(3). 315–332.
Cooperrider, Kensy, James Slotta, & Rafael Núñez. 2018. The preference for pointing with the hand is not universal. Cognitive Science 42(4). 1375–1390.
Coppola, Marie & Ann Senghas. 2010. The emergence of deixis in Nicaraguan signing. In Diane Brentari (ed.), Sign languages: A Cambridge language survey, 543–569. Cambridge: Cambridge University Press.
Cormier, Kearsy & Jordan Fenlon. 2009. Possession in the visual-gestural modality: How possession is expressed in British Sign Language. In William B. McGregor (ed.), The expression of possession, 389–422. Berlin: Mouton de Gruyter.
Corver, Norbert. 1992. Left branch extraction.
In Kimberly Broderick (ed.), Proceedings of NELS 22, 67–84. Amherst, MA: Graduate Linguistic Student Association.
Delsing, Lars-Olof. 1993. The internal structure of noun phrases in the Scandinavian languages. Lund: University of Lund PhD dissertation.
Duffield, Nigel. 1995. Particles and projections in Irish syntax. Boston, MA: Kluwer.
Fukui, Naoki. 1988. Deriving the differences between English and Japanese: A case study in parametric syntax. English Linguistics 5. 249–270.
Geraci, Carlo. 2009. Epenthesis in Italian Sign Language. Sign Language & Linguistics 12(1). 3–51.
Goldin-Meadow, Susan, Cynthia Butcher, Carolyn Mylander, & Mark Dodge. 1994. Nouns and verbs in a self-styled gesture system: What's in a name? Cognitive Psychology 27(3). 259–319.
Greenberg, Joseph H. 1963. Some universals of grammar with particular reference to the order of meaningful elements. In Joseph H. Greenberg (ed.), Universals of grammar, 73–113. Cambridge, MA: MIT Press.
Hendriks, Bernadet. 2008. Jordanian Sign Language: Aspects of grammar from a cross-linguistic perspective. Amsterdam: University of Amsterdam PhD dissertation. Utrecht: LOT.
Hoffmeister, Robert J. 1977. The acquisition of American Sign Language by deaf children of deaf parents: The development of demonstrative pronouns, locatives, and personal pronouns. Minneapolis, MN: University of Minnesota PhD dissertation.
Hunger, Barbara. 2006. Noun/verb pairs in Austrian Sign Language (ÖGS). Sign Language & Linguistics 9(1/2). 71–94.
Hunsicker, Dea & Susan Goldin-Meadow. 2012. Hierarchical structure in a self-created communication system: Building nominal constituents in homesign. Language 88(4). 732–763.
Irani, Ava. 2019. On (in)definite expressions in American Sign Language. In Ana Aguilar-Guevera, Julia Pozas Loyo, & Violeta Vázquez-Rojas Maldonado (eds.), Definiteness across languages, 113–151. Berlin: Language Science Press.
Jaggar, Philip J. 2001. Hausa. Amsterdam: John Benjamins.
Johnston, Trevor. 2001. Nouns and verbs in Australian Sign Language: An open and shut case? Journal of Deaf Studies and Deaf Education 6(4). 235–257.
Kayne, Richard S. 2009. Antisymmetry and the lexicon. Linguistic Variation Yearbook 8(1). 1–31.
Kimmelman, Vadim. 2009. Parts of speech in Russian Sign Language: The role of iconicity and economy. Sign Language & Linguistics 12(2). 161–186.
Kimmelman, Vadim. 2017. Quantifiers in Russian Sign Language. In Edward L. Keenan & Denis Paperno (eds.), Handbook of quantifiers in natural language, vol. 2, 803–855. Dordrecht: Springer.
Kita, Sotaro (ed.). 2003. Pointing: Where language, culture, and cognition meet. Mahwah, NJ: Lawrence Erlbaum.
Koulidobrova, Elena. 2017. Elide me bare: Null arguments in American Sign Language. Natural Language & Linguistic Theory 35(2). 397–446.
Koulidobrova, Elena & Diane Lillo-Martin. 2016. A 'point' of inquiry: The case of the (non-)pronominal IX in ASL. In Patrick Grosz & Pritty Patel-Grosz (eds.), Impact of pronominal form on interpretations, 221–250. Berlin: Mouton de Gruyter.
Kubus, Okan. 2008. An analysis of Turkish Sign Language (TİD) phonology and morphology. Ankara: Middle East Technical University MA thesis.
Lai, Yu-ting. 2005. Noun phrases in Taiwan Sign Language. Minxiong Township, Taiwan: National Chung Cheng University MA thesis.
Liddell, Scott K. 1980. American Sign Language syntax. The Hague: Mouton.
Liddell, Scott K. 1994. Tokens and surrogates. In Inger Ahlgren, Brita Bergman, & Mary Brennan (eds.), Perspectives on sign language structure: Papers from the 5th International Symposium on Sign Language Research, 105–119. Durham: International Sign Linguistics Association.
Liddell, Scott K. 1995. Real, surrogate and token space: Grammatical consequences in ASL. In Karen Emmorey & Judy Reilly (eds.), Language, gesture, and space, 19–42. Hillsdale, NJ: Lawrence Erlbaum.
Longobardi, Giuseppe. 1994. Reference and proper names: A theory of N-movement in syntax and logical form. Linguistic Inquiry 25(4). 609–665.
Longobardi, Giuseppe. 2008.
Reference to individuals, person, and the variety of mapping parameters. In Henrik Høeg Müller & Alex Klinge (eds.), Essays on nominal determination: From morphology to discourse management, 189–​211. Amsterdam: John Benjamins. MacLaughlin, Dawn. 1997. The structure of determiner phrases: Evidence from American Sign Language. Boston: Boston University PhD dissertation. Mantovan, Lara. 2015. Nominal modification in Italian Sign Language (LIS). Venice: Ca’ Foscari University PhD dissertation. Mantovan, Lara & Carlo Geraci. 2017. The syntax of nominal modification in Italian Sign Language (LIS). Sign Language & Linguistics 20(2). 183–​220. Marantz, Alex. 1997. No escape from syntax:  Don’t try morphological analysis in the privacy of your own lexicon. In Alexis Dimitriadis, Laura Siegel, Clarissa Surek-​Clark & Alexander Williams (eds.) University of Pennsylvania Working Papers in Linguistics 4(2), 201–​225.

229

230

Natasha Abner Mathieu, Eric. 2009. From local blocking to cyclic Agree: The role and meaning of determiners in the history of French. In Jila Ghomeshi, Ileana Paul, & Martina Wiltschko (eds.), Determiners: Variation and universals, 123–​157. Amsterdam: John Benjamins. Meier, Richard P. 1990. Person deixis in American sign language. In Susan D. Fischer & Patricia Siple (eds.), Theoretical issues in sign language research 1, 175–​190. Chicago, IL: University of Chicago Press. Meir, Irit & Wendy Sandler. 2007. A language in space:  The story of Israeli Sign Language. New York: Lawrence Erlbaum. Mohammad, Mohammad A. 1988. On the parallelism between IP and DP. In Hagit Borer (ed.), Proceedings of WCCFL 7, 241–​254. Stanford, CA: CSLI. Neidle, Carol, Judy Kegl, Dawn MacLaughlin, Benjamin Bahan, & Robert G. Lee. 2000. The syntax of American Sign Language: Functional categories and hierarchical structure. Cambridge, MA: MIT Press. Nowak, Ethan. 2013. Things a semantics for demonstratives should do. Berkeley, CA: University of California, Berkeley manuscript. Nuhbalaoğlu, Derya & A. Sumru Özsoy. 2014. Linearization in TİD NPs. Paper presented at Formal and Experimental Advances in Sign Language Theory (FEAST 3). Venice: Ca’ Foscari University. Padden, Carol. 1988. Interaction of morphology and syntax in American Sign Language. New York: Garland Publishing. Pfau, Roland. 2011. A point well taken:  On the typology and diachrony of pointing. In Donna Jo Napoli & Gaurav Mathur (eds.), Deaf around the world. The impact of language, 144–​163. Oxford: Oxford University Press. Pfau, Roland & Markus Steinbach. 2011. Grammaticalization in sign languages. In Bernd Heine & Heiko Narrog (eds.), The Oxford handbook of grammaticalization, 683–​695. Oxford: Oxford University Press. Pizzuto, Elena & Serena Corazza. 1996. Noun morphology in Italian Sign Language (LIS). Lingua 98(1–​3). 169–​196. Quadros, Ronice M. de. 1999. Phrase structure of Brazilian Sign Language. Porto Alegre: Pontifical Catholic University of Rio Grande do Sul PhD dissertation. Quer, Josep, Delfina Aliaga, Gemma Barberà, Josep M. Boronat, Santiago Frigola, Joan M. Gil, Pilar Iglesias, Marina Martinez, & Eva M. Rondoni. 2008. Structures of possession and existence in Catalan Sign Language. In Ulrike Zeshan & Pamela Perniss (ed.), Possessive and existential constructions in sign languages, 33–​53. Nijmegen: Ishara Press. Ritter, Elizabeth. 1988. A head-​movement approach to construct-​state noun phrases. Linguistics 26(6). 909–​930. Rutkowski, Paweł, Anna Kuder, Małgorzata Czajkowska-​Kisil, & Joanna Łacheta. 2015. The structure of nominal constructions in Polish Sign Language (PJM): A corpus-​based study. Studies in Polish Linguistics 10(1). 1–​15. Schreurs, Linda 2006. The distinction between formally and semantically related noun-​verb pairs in Sign Language of the Netherlands (NGT). Amsterdam: University of Amsterdam MA thesis. Schwarz, Florian. 2009. Two types of definites in natural language. Amherst, MA: University of Massachusetts PhD dissertation. Siloni, Tal. 1994. Noun phrases and nominalizations. Geneva:  University of Geneva PhD dissertation. Steinbach, Markus. 2012. Plurality. In Roland Pfau, Markus Steinbach, & Bencie Woll (ed.), Sign language: An international handbook, 112–​136. Berlin: De Gruyter Mouton. Supalla, Ted & Elissa L. Newport. 1978. How many seats in a chair? The derivation of nouns and verbs in American Sign Language. In Patricia Siple (ed.) Understanding language through sign language research, 91–​132. 
New York, NY: Academic Press. Tang, Gladys & Felix Y. B. Sze. 2002. Nominal expressions in Hong Kong Sign Language: Does modality make a difference? In Richard Meier, Kearsy Cormier, & David Quinto-​Pozos (eds.), Modality and structure in signed and spoken languages, 296–​ 319. Cambridge:  Cambridge University Press. Thompson, Robin, Karen Emmorey, & Robert Kluender. 2006. The relationship between eye gaze and verb agreement in American Sign Language: An eye-​tracking study. Natural Language & Linguistic Theory 24(2). 571–​604.

230

231

Determiner phrases Vink, Marlies. 2004. DP in de Nederlandse Gebarentaal  –​een crosslinguistische vergelijking. Amsterdam: University of Amsterdam MA thesis. Wilbur, Ronnie B. 1979. American Sign Language and sign systems:  Research and applications. Baltimore, MD: University Park Press. Wilbur, Ronnie B. 2003. Representations of telicity in ASL. In Jonnathan E. Cihlar, Amy L. Franklin, David W. Kaiser, & Irene Kimbara (eds.), Proceedings of CLS 39, 354–​368. Chicago, IL: Chicago Linguistics Society. Zeshan, Ulrike. 2003. Indo-​ Pakistani Sign Language grammar:  A typological outline. Sign Language Studies 3(2). 157–​212. Zeshan, Ulrike & Pamela Perniss (eds.). 2008. Possessive and existential constructions in Kata Kolok. In Ulrike Zeshan & Pamela Perniss (ed.), Possessive and existential constructions in sign languages, 125–​149. Nijmegen: Ishara Press. Zhang, Niina Ning. 2007. Universal 20 and Taiwan Sign Language. Sign Language & Linguistics 10.  55–​81. Zimmer, Jane & Cynthia Patschke. 1990. A class of determiners in ASL. In Ceil Lucas (ed.), Sign language research: Theoretical issues, 201–​210. Washington, DC: Gallaudet University Press.


11
CONTENT INTERROGATIVES
Theoretical and experimental perspectives
Meltem Kelepir

11.1  Introduction

One of the most studied constructions in sign languages, and possibly one of the constructions that have raised the most controversy in the sign language linguistics literature, is content interrogatives. As the title suggests, this chapter aims to provide a survey of both theoretical and experimental questions that have been raised for this construction in sign languages. The first main section focuses on the theoretical issues regarding the syntactic analyses of the distribution of the interrogative (i.e., wh-) signs (henceforth, int-signs) and the functions of the non-manual markers in main and embedded content interrogatives. The second main section presents the experimental studies that investigate the acquisition and processing of simplex content interrogatives.1

11.2  Theoretical perspectives

Perhaps the most controversial issue in the theoretical discussions of content interrogatives in sign languages has been the landing site of a moved int-sign and, related to that, the direction of the movement. Extensive typological and theoretical work on spoken languages has shown that, very roughly, languages can be categorized as wh-movement languages and wh-in-situ languages (putting aside a number of sub-categories for the moment) (Cheng 1991; i.a.). In an overwhelming majority of wh-movement languages studied so far, the interrogative word appears in the sentence-initial position, which has led generative linguists to argue that wh-movement is leftward to a left-branching [Spec, CP] (Chomsky 1977, 1986; Rizzi 1990; i.a.). Generally, it is assumed that this movement is triggered by the feature checking requirements of the [+wh] feature of both the interrogative word and C, the head of CP. In wh-in-situ languages, these features are checked without triggering overt movement of the entire wh-phrase (see Cheng (2009) and Watanabe (2001) for various analyses of wh-in-situ). Sign languages present a puzzle since in the majority of sign languages studied so far, it is the sentence-final, not the sentence-initial position, that is either the only possible position (in addition to in situ) or the most unmarked position for the int-sign (Zeshan 2004, 2006; Cecchetto 2012).


This observation naturally leads to the following questions:  why do sign languages differ from spoken languages? Do they constitute a typological family based on modality in this respect? Do they show that [Spec, CP] (and specifiers in general, for that matter) can be on the right as well? If so, why does modality of language determine the position of the specifier of a category? Does this property correlate with other properties determined by modality? After all, syntactic structure and operations should be blind to whether the sentences are pronounced with sounds or signs. Can this be an apparent difference between spoken and sign languages? If so, can the distribution of int-​signs be explained without having to treat sign languages as typological exceptions? Initial works on content interrogatives in the 1990s were triggered by these questions, and their proposals were mainly based on data from American Sign Language (ASL). Work on other sign languages (as well as ASL) further showed that many sign languages display properties in content interrogatives that are rarer in spoken languages such as preference for s(entence)-​final position of int-​signs, as in the Italian Sign Language (LIS) example (1a), and int-​sign doubling, as in the Brazilian Sign Language (Libras) example (1b). However, at a more micro-​typological level they also differ from each other in terms of the restrictions on the distribution of int-​signs, the interpretation of different word orders, and the function and spreading domain of non-​manual markers.      

(1)  a.            wh
        GIANNI EAT WHAT
        ‘What does Gianni eat?’
                                        (LIS, Cecchetto et al. 2009: 287–290)

     b. JOHN SEE WHO YESTERDAY WHO
        ‘Who exactly did John see yesterday?’
                            (Libras, adapted from Nunes & Quadros 2006: 463)

The following subsections provide a survey of different proposals related to five main theoretical issues: (i) the controversy over the syntactic position of the int-signs and the direction of the wh-movement, (ii) clause-typing of interrogatives, (iii) the form and functions of non-manual marking in content interrogatives, (iv) analyses of multiple wh-questions, and (v) analyses of embedded content interrogatives and the implications of the findings for main content interrogatives. Owing to space limitations, certain constructions that contain int-signs are not discussed in detail in this chapter but are referred to in the context of the discussion of other related constructions, when necessary. These are wh-cleft constructions (see Section 11.2.1.8 and Wilbur 1996; Caponigro & Davidson 2011; i.a.), echo questions, rhetorical questions, and tag questions (see Baker-Shenk 1983; Aarons et al. 1992; Wilbur 1996; Hoza et al. 1997; Neidle et al. 2000; i.a.).

11.2.1  Positions of the interrogative signs and the leftward/rightward controversy

It has been observed that int-signs can occur in s-initial, s-final, and in situ positions, and in doubling constructions where one of the copies is s-final and the other can be in situ or s-initial, depending on the language (Zeshan 2004, 2006; i.a.). However, sign languages differ from each other in the possible syntactic positions for the int-signs and in which position they treat as the most unmarked one (Cecchetto 2012: 14f.).


The first works on this topic focus on ASL, and ASL remains the most extensively studied sign language in this respect (see Fischer (2006) for a comprehensive description of ASL questions). The debate on the position of int-​signs in ASL started in the 1990s with Lillo-​Martin (1990), Lillo-​Martin & Fischer (1992), and Petronio (1993) proposing a leftward wh-​movement analysis. This proposal was countered initially in Aarons et al. (1992), where the authors argued for a rightward wh-​movement analysis and proposed a structure for ASL where [Spec, CP] is on the right. I will use the acronyms LMA and RMA for the Leftward Movement Approach and Rightward Movement Approach, respectively. Bouchard & Dubuisson (1995), on the other hand, argue against any movement analysis (see Section 11.2.1.9). The disagreement between the LMA and RMA is two-​fold: one aspect concerns the distributional possibilities of int-​signs and the acceptability judgments of the language consultants of the researchers. The other aspect concerns the analysis of the syntactic positions of the int-​signs. As more and more work was done over the years, researchers made modifications on their reports on the judgments they received from their consultants, leading also to slight modifications of their generalizations. Petronio & Lillo-​Martin (1997) and Neidle et  al. (2000) are the main resources of discussion in this chapter for the LMA and RMA analyses, respectively, since they contain not only the updated information of signer judgments but also the arguments and counterarguments of each analysis accumulated throughout a decade (see also Fischer 2006; Sandler & Lillo-​Martin 2006). Needless to say, the discussion here cannot do justice to the numerous arguments and counterarguments proposed by the two groups of researchers but can merely point out the main observations, claims, and some of the arguments. Both research groups agree for ASL that int-​signs can remain in situ (2a) or occur in doubling constructions where one sign appears s-​initially and the other s-​finally (2b).2                           

(2)  a.                wh
        TEACHER LIPREAD WHO YESTERDAY                              in situ
        ‘Who did the teacher lipread yesterday?’
                                              (ASL, Neidle et al. 2000: 111)

     b.                  wh
        WHAT NANCY BUY YESTERDAY WHAT                              doubled
        ‘What did Nancy buy yesterday?’
                                    (ASL, Petronio & Lillo-Martin 1997: 27)

The researchers do not focus much on the syntactic analysis of in situ constructions such as (2a) above but seem to adopt the general analysis proposed for other languages with in situ interrogative words at the time, namely, that the wh-​features of the sign and the C are checked at LF (Huang 1982). They disagree on the analysis of doubling constructions, which is discussed in the following section. Proposals for the doubling constructions constitute the basis for the analyses of the single (s-​initial or s-​final) int-​ sign interrogatives.

11.2.1.1  Doubling constructions

Both groups of researchers assume that in wh-doubling constructions one of the int-signs undergoes wh-movement to a [Spec, CP] to check wh-features and the other one has an information-structural function: focus or topic, depending on the analysis.


In the LMA analysis, represented in Figure  11.1, Petronio & Lillo-​Martin (1997) argue that the s-​initial int-​sign undergoes wh-​movement to a left-​branching [Spec, CP] (Petronio 1993). Here the [+wh] feature of the wh-​phrase is checked against the [+wh] feature in C. The s-​final int-​sign is a base-​generated focus ‘double’, occupying the right-​ branching C head which bears both the [+focus] and the [+wh] features. The wh-​moved int-​sign, the ‘twin’, is also marked [+focus], and it functions as a focus operator which undergoes LF raising to [Spec, CP]. In this position, it enters into a spec-​head relation with the ‘double’ in C whose [+focus] feature is checked.

Figure 11.1  Leftward Movement Analysis (Petronio & Lillo-​Martin 1997: 27)

In the RMA analysis, visualized in Figure 11.2, the s-final int-sign undergoes wh-movement to a right-branching [Spec, CP] and the s-initial int-sign is a base-generated topic in a left-branching [Spec, TopicP].

Figure 11.2  Rightward Movement Analysis (Petronio & Lillo-Martin 1997: 27)


The LMA group’s account has two major components: arguing that final int-signs show properties of focus, similar to other focus doubling constructions in ASL (Petronio 1993), and that they are heads, not phrases, thus occupying the head position of CP, and not the Spec of CP as claimed by the RMA group. Petronio & Lillo-Martin (1997) claim that s-final int-signs cannot be full-fledged wh-phrases, as shown in (3b), parallel to other non-wh doubles discussed in Petronio (1993: 135) (see also Sandler & Lillo-Martin (2006: 442) for other parallelisms between wh- and non-wh doubles and Wilbur (1995) for a treatment of s-final full wh-phrases as part of echo questions).

(3)  a.                 whq
        WHICH COMPUTER JOHN BUY WHICH
        ‘Which computer did John buy?’

     b.                   whq
      * WHICH COMPUTER JOHN BUY WHICH COMPUTER
                                    (ASL, Petronio & Lillo-Martin 1997: 33)

Hence, they claim that the s-final int-signs as in (3a) are base-generated in C with a [+focus] feature in addition to a [+wh] feature. However, Neidle et al. (2000: 137) object to this claim on the grounds that it makes wrong predictions regarding the position of the modal relative to negation and claim that such s-final signs are tags and not focus doubles. They also argue that phrasal wh-elements are possible according to their informants, as in example (4).

(4)                    wh
     [ti DIE]TP [WHO POSS MOTHER]i
     ‘Whose mother died?’
                                              (ASL, Neidle et al. 2000: 136)

Notice also that since Petronio & Lillo-Martin (1997) assume that in situ wh-phrases move to [Spec, CP] at LF, this analysis predicts that ASL must allow interrogatives in which the twin is in situ and the double is s-final. They do not discuss this possibility, but Neidle et al. (2000) report that such configurations are not possible.

In the RMA analysis of doubling constructions, represented in Figure 11.2, there are two main claims: one is that the s-final int-sign has undergone rightward wh-movement and occupies the [Spec, CP]. The researchers support this claim by reporting two observations: that full interrogative phrases (henceforth, int-phrases) can occur s-finally and that the twin of the s-final int-phrase cannot be in situ. The latter is argued to show that the LF-movement of the in situ phrase to [Spec, CP] would be blocked, since this position is already occupied by the s-final phrase (see Section 11.2.1.7 for a discussion of the grammaticality of this configuration in Libras). The other main claim is that the s-initial int-sign has not undergone wh-movement but is base-generated there in [Spec, TopicP]. This claim is supported both by topic non-manual marking co-occurring with these signs and by the observation of topic properties identified in Aarons (1994). See Neidle et al. (2000: 115–117) for a number of arguments for the topichood of s-initial int-signs and Sandler & Lillo-Martin (2006: 441f.) and Wilbur (1995, 2011: 162) for a detailed criticism of the analysis of initial int-signs as topics.


Recall that in both of the analyses discussed above, one of the int-​signs is assumed to have undergone wh-​movement to a [Spec, CP]. This implies that ASL should also have interrogatives with single int-​signs at a [Spec, CP] –​s-​initially for the LMA approach and s-​finally for the RMA approach. The empirical question is whether these are attested. There are contradicting reports on the judgments of the signers: the consultants of the RMA group do not accept questions with s-​initial single int-​signs, but strongly prefer those with s-​final int-​signs, and the consultants of the LMA group accept questions with s-​final single int-​signs only under certain specific circumstances.

11.2.1.2  Single sentence-initial interrogative signs

Neidle et al. (2000: 127) report that configurations such as (5) below are rejected by their informants.3

(5)         wh
      * WHO JOHN HATE
        ‘Who does John hate?’
                                              (ASL, Neidle et al. 2000: 127)

Petronio & Lillo-​Martin (1997: 50), on the other hand, report that the judgments they have received regarding configurations such as (5)  are not uniform and they offer two possible explanations for the non-​uniformity, based on their claim that s-​final int-​signs are focus doubles: for some signers it may be the case that the C head with the [+focus] and [+wh] features must be filled overtly, for some stylistic reasons (Petronio & Lillo-​ Martin 1997: 50–​52). Also, speakers may differ with respect to whether they treat interrogative elements as inherently focused.

11.2.1.3  Single sentence-final interrogative signs

The structures with single s-final int-signs are straightforward cases for the RMA group since they analyze them as involving rightward movement of the int-sign to [Spec, CP].4 Recall from Section 11.2.1.1 that they also claim, contra Petronio & Lillo-Martin’s (1997) reports, that their consultants find full s-final int-phrases such as (4) perfectly acceptable. Petronio & Lillo-Martin (1997) report that the judgments of their consultants vary for structures such as (6) as well, and they argue that for those speakers who accept them as grammatical, they in fact involve doubling in which the twin (in a left [Spec, CP]) is not pronounced, based on the fact that content interrogatives without int-signs are possible (Lillo-Martin & Fischer 1992), and twins in focus doubling constructions can be left unpronounced in appropriate contexts.

(6)        whq
     BUY CAR WHO
     ‘Who bought the car?’
                                    (ASL, Petronio & Lillo-Martin 1997: 36)

Furthermore, they claim that some of these constructions may involve either multi-sentence discourses where a presupposed constituent is given and then a question relating to this presupposition is asked (Petronio & Lillo-Martin 1997: 47), or topicalization of VPs, stranding the int-sign s-finally, as in (7), where ‘t’ stands for topic non-manual marking. See Neidle et al. (2000: 142–144) for a criticism of these proposals.


(7)        t         whq
     BUY CAR       WHO
     ‘As for buying the car, who bought it?’
                                    (ASL, Petronio & Lillo-Martin 1997: 46)

11.2.1.4  Role of the non-manual markers

LMA and RMA also differ in the way they analyze the spreading domain and the function of the non-manual markers in ASL content interrogatives.5 They disagree as well on the optionality/obligatoriness of spreading. According to the RMA, the non-manual marker is associated with the lexical wh-feature of the int-sign as well as with the wh-feature of the head of the CP. Spreading of this marker is optional when an int-sign occupies the [Spec, CP] position, as in (8), but obligatory when it is in situ, as in (9). The optional spreading domain is represented with broken lines.

(8)  Optional spreading with rightward-moved int-sign
     - - - - - - - - - - -  wh
     ti LOVE JOHN WHOi
     ‘Who loves John?’
                                (ASL, adapted from Neidle et al. 2000: 113)

(9)  Obligatory spreading with wh-in-situ
          wh
      * WHO LOVE JOHN [+wh]C
                                              (ASL, Neidle et al. 2000: 113)

This pattern is related to the requirement of the non-manual marker to be associated with overt, lexical material. If, as a result of rightward movement, [Spec, CP] is filled with an int-sign, as in (8), this requirement is fulfilled, hence the optional spreading. When the int-sign is in situ, on the other hand, as in (9), [Spec, CP] does not have lexical material; thus, the non-manual marker must spread over the c-command domain of [Spec, CP] to be associated with lexical material. The ungrammaticality of (9) shows that the s-initial int-sign does not occupy the [Spec, CP]. If it were in [Spec, CP], then it would be grammatical regardless of the spreading domain of the non-manual marker.

The intensity of the non-manual marker is also claimed to support their proposal. Specifically, they claim that the maximal intensity of the non-manual marker correlates with the syntactic positions of the source of the [+wh] feature (Bahan 1996). When the structure contains two sources, e.g., an s-initial int-sign and the s-final wh-feature of C, then the non-manual marker is intense throughout the clause since the intensity perseverates between the two sources. When, on the other hand, the structure contains a single int-sign at the end of the clause, the intensity of the non-manual marker diminishes towards the beginning of the clause as the distance from the source, C, increases.

Petronio & Lillo-Martin (1997), on the other hand, consider the non-manual marking in content interrogatives as associated with the [+wh] and [+focus] features of the head of CP (see also Watson (2010), where it is argued, following Petronio (1993) and Wilbur (1996), that the intensity of non-manual marking is increased in s-final position because it is a prominence position). Moreover, contra Neidle et al. (2000), they claim that spreading is obligatory.


11.2.1.5  Long-distance extraction of interrogative signs

There is very little discussion in the literature on embedded structures where an int-sign seems to be extracted from an embedded clause. Lillo-Martin (1990) discusses such structures and concludes that they are not grammatical in ASL, attributing the ungrammaticality to the barrierhood of CPs in ASL (Chomsky 1986; Lasnik & Saito 1994). Petronio & Lillo-Martin (1997: 52f.), however, report that these structures do occur, albeit rarely, in natural conversations and the judgments vary.

(10)               whq
      WHO JOHN THINK MARY LIKE
      ‘Who does John think Mary likes?’
                                    (ASL, Petronio & Lillo-Martin 1997: 52)

Aarons et al. (1992) and Neidle et al. (2000: 121) argue that extraction is possible when the int-sign is moved rightward to the s-final position, as in (11). Similar to their observations for simplex interrogatives, the non-manual marker either spreads only over the clause-final int-sign (in [Spec, CP]) or over the entire interrogative.

(11)                                                      wh
      [[TEACHER EXPECT [[ti PASS TEST]TP2 t]CP2 ]TP1 WHOi ]CP1
      ‘Who does the teacher expect to pass the test?’
                                (ASL, adapted from Neidle et al. 2000: 121)

To summarize the discussion so far, initial works by the LMA and the RMA group assumed that ASL has wh-movement triggered purely by morphosyntactic wh-features and ignored the question of when the int-sign remains in situ. Notice that if wh-movement is triggered by feature checking requirements, the question of why it is ‘optional’ is not trivial. Neither of the groups discusses this issue. Later on, careful work on the kinds of contexts in which certain configurations of content interrogatives are acceptable revealed that different configurations may correlate with different information-structural properties of these structures. The effect of context is discussed in detail in Petronio & Lillo-Martin (1997) as well as in Neidle et al. (2000) and becomes even more prominent in Neidle (2002).

11.2.1.6  Sentence-final interrogative signs undergoing focus movement

Neidle (2002) develops a new analysis of s-final int-signs built upon the RMA, whereby ASL is analyzed as a wh-in-situ language which allows scrambling of int-signs triggered by information-structural motivations (for discussion of information structure, see Kimmelman & Pfau, Chapter 26). Neidle (2002) claims that there is a subtle difference between those interrogatives with in situ and those with s-final int-signs in terms of the presuppositions they invoke. When the int-sign is s-final, as in (12a), there is a presupposition that someone did arrive, whereas when the int-sign is in situ, as in (12b), no such presupposition is found. “Nobody” would be an odd answer to (12a), but would be unremarkable as an answer to (12b).


     

(12)  a.               wh
         [ARRIVE]TP  WHO

      b.    wh
         WHO  [ARRIVE]TP
         ‘Who arrived?’
                                                       (ASL, Neidle 2002: 76)

She attributes this difference in presuppositions to the focus interpretation of the int-​sign in (12a) and proposes an arrangement of functional categories in ASL where a focus phrase (FP) dominates TP and is dominated by CP:

Figure 11.3  Position of focus phrase vis-à-vis CP and TP in ASL (Neidle 2002: 87)
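The hierarchy in Figure 11.3 and the derivation spelled out in the next paragraph can be sketched in LaTeX as follows. This is a simplified reconstruction (amsmath assumed), not Neidle's original tree; both [Spec, FP] and [Spec, CP] are shown on the right, as the prose requires, and intermediate material is omitted:

  % Hierarchy assumed by Neidle (2002): (Topic) > CP > FP > TP.
  % Sketch of (12a), [ARRIVE]TP WHO: the focused int-sign moves through [Spec, FP]
  % to the right-branching [Spec, CP], leaving the traces t and t'.
  \[
  [_{\text{CP}}\ [_{\text{FP}}\ [_{\text{TP}}\ t_i\ \text{ARRIVE}]\ \ t'_i\,]\ \ \text{WHO}_i\,]
  \]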

In constructions such as (12a), the int-sign has a focus feature, and it first moves to [Spec, FP], and then to the (right-branching) [Spec, CP]. When the int-sign is in situ, it lacks the focus feature and thus does not move to [Spec, FP]. Even though it is not stated overtly, the implication here is that for an int-sign to move to [Spec, CP], it has to first go through FP. Neidle (2002) supports the proposal for an FP by referring to Aarons’s (1994) work on different types of topic in ASL, and claims that Aarons’s “moved topics” such as conditional clauses are in fact focus phrases. She argues that there cannot be more than one focused phrase in the left periphery, and if there is already a focused constituent in the interrogative, then the int-sign cannot move to the s-final position, as in (13b), since the [Spec, FP] that it has to go through would already be occupied. In those cases, it has to remain in situ, as in (13a).6

(13)  Context: Sue forgot the umbrellas again.

         conditional              wh
      a. RAIN        [WHO BAWL-OUT SUE]
         ‘If it rains, who will bawl out Sue?’

          conditional                wh
      b. ? RAIN       [BAWL-OUT SUE] WHO
                                                       (ASL, Neidle 2002: 83)

Neidle (2002) specifically shifts the focus from wh-movement to the in situ constructions as the unmarked configuration for content interrogatives, attributing a pragmatic motivation to the derivations involving movement. The discovery that different configurations of interrogatives are not really freely interchangeable but interact with information structure is highlighted more strongly in more recent work on ASL interrogatives, especially in Abner (2011) and Churng (2011) (see Sections 11.2.1.8.1 and 11.2.4).

The question regarding the direction of wh-movement and the position of [Spec, CP] is raised again in further studies on ASL and other sign languages, which provide different perspectives and insights into the analysis of content interrogatives. These are discussed in the remainder of this section. One study differs from the majority of the works on this issue in that it argues against any movement for int-signs; this study is discussed in Section 11.2.1.9.

11.2.1.7  A linearization account for wh-doubling constructions in Libras

Nunes & Quadros (2006) extend Petronio & Lillo-Martin’s (1997) analysis to wh-doubling constructions in Libras, and argue that these constructions involve adjunction of the int-sign to an emphatic focus head (E-focus) and the phonetic realization of more than one link of a wh-chain (recall Neidle’s (2002) focus movement analysis of s-final int-signs discussed in the previous section). They develop their analysis based on interrogatives where one copy of the int-sign is in situ and the other at the right edge of the clause (14).7

(14)  JOHN SEE WHO YESTERDAY WHO
      ‘Who exactly did John see yesterday?’
                            (Libras, adapted from Nunes & Quadros 2006: 463)

Following their analysis for doubled verb constructions (Lillo-Martin & Quadros 2004; Nunes 2004; Nunes & Quadros 2004), they propose the following:

(i) ASL and Libras have the following structure: TopicP > E-FocP > TP with left-branching heads;
(ii) The int-sign moves and adjoins to the E-Foc head, leaving a copy in the original position. The TP that contains the lower copy of the int-sign undergoes remnant movement to [Spec, TopicP] and also leaves a copy behind;
(iii) Existence of two copies of the same element creates a problem for linearization and Nunes (2004) proposes that even though the lower copy must be deleted, in wh-duplication constructions the moved element morphologically fuses with the E-Foc head, and since linearization does not apply word-internally, the fused copy becomes invisible to the linearization mechanism, and does not get deleted. The copy of the remnant-moved TP, on the other hand, gets deleted, while the copy of the int-sign in the remnant-moved TP gets pronounced. With the copy of the TP deleted, the int-sign that has fused with the E-Foc head ends up being pronounced at the end of the sentence. Hence, the order in (14) (see Kimmelman & Pfau, Chapter 26, Figure 26.4, for a structure).8

The authors support their analysis involving movement by showing that the constructions display island effects (Nunes & Quadros 2006: 467). They support the fusion analysis by showing that phrasal constituents such as WHICH BOOK cannot be doubled since they cannot be fused with the E-focus head whereas non-complex constituents can be doubled since they contain only the head, which is available for fusion.9,10 This parallels the
analysis of ASL wh-​doubling constructions in Petronio & Lillo-​Martin (1997) discussed in Section 11.2.1.1. The researchers also report that, similarly to ASL, in Libras s-​initial occurrence of a single int-​sign is contingent on the existence of its copy s-​finally.
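The copy-and-fusion derivation in (i)–(iii) can be schematized for (14) as follows. This is a hedged sketch based on the description above (amsmath assumed), with features, intermediate projections, and the internal structure of the fused E-Foc head simplified:

  % Step 1: WHO adjoins to (fuses with) the E-Foc head, leaving a copy in situ.
  % Step 2: the TP containing that lower copy remnant-moves to [Spec, TopicP].
  % Step 3: the TP copy left behind is deleted at linearization; the fused WHO,
  %         being word-internal, escapes deletion, yielding JOHN SEE WHO YESTERDAY WHO.
  \[
  [_{\text{TopicP}}\ [_{\text{TP}}\ \text{JOHN SEE WHO YESTERDAY}]_k\ \ [_{\text{E-FocP}}\ [\text{WHO}+\text{E-Foc}]\ \ \underbrace{[_{\text{TP}}\ \text{JOHN SEE WHO YESTERDAY}]_k}_{\text{deleted copy}}\,]]
  \]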

11.2.1.8  Clefted question analyses

This section discusses alternative analyses for interrogatives with single s-final int-signs in ASL (Abner 2011) and doubling constructions in Italian Sign Language (LIS, Branchini et al. 2013). Both propose that the constructions under discussion are cleft constructions.

11.2.1.8.1  Interrogatives with single sentence-final interrogative signs in ASL

Abner (2011) discusses structural varieties of content interrogatives in ASL and claims that these varieties encode semantically and/or pragmatically distinct questions. She considers the wh-in-situ configuration as the standard question formation strategy and doubling constructions (WH-Double) as encoding a type of emphatic focus (recall the analyses of Neidle (2002) and Nunes & Quadros (2006) discussed in Sections 11.2.1.6 and 11.2.1.7, respectively). She mainly focuses on the third type, where the int-sign is at the right edge (WH-R), which she argues is in fact a clefted question. Abner shows that these different types of constructions are not interchangeable, and their acceptability varies with context, an observation also mentioned in earlier work by the LMA and the RMA groups.

Abner argues against Petronio & Lillo-Martin’s (1997) analysis that interrogatives with a single s-final int-sign are similar to those with doubling with the difference that in the former, the s-initial int-sign is phonologically null (see Section 11.2.1.3). She reports two observations: (i) speakers note that these constructions are not interchangeable since doubling is only felicitous as an emphatic question, and (ii) the s-final int-signs cannot be hosted in the same syntactic position as focus doubles since full phrasal material such as WHICH BOOK is possible in the WH-R structure, whereas it is not in the WH-Double. Thus, Abner’s description of the speakers’ judgments parallels that of Neidle et al. (2000) but differs from that of Petronio & Lillo-Martin (1997).

Recall Neidle’s (2002) claims (Section 11.2.1.6) that interrogatives with a single s-final int-sign contain an (existential) presupposition and cannot be answered with a negative quantifier. Abner (2011) proposes that this is because WH-R constructions are in fact clefted questions. The effect of existential presupposition in clefted constructions is illustrated for English below.

(15)  Question: What is it that you bought yesterday?
      Answer:   #Nothing.
                                                             (Abner 2011: 26)

WH-R interrogatives in ASL are argued to display three properties found in cleft constructions: existential presupposition, exhaustivity, and contrastivity, whereas wh-in-situ interrogatives do not have these properties. Regarding the syntactic properties of clefted questions, Abner (2011) argues that the semantic properties observed are the result of the following independently motivated operations: the movement of the int-sign to [Spec, FocP] headed by a null copula BE and
the topicalization of the remnant IP (see Figure 11.4). The focus movement is responsible for the exhaustivity and contrastivity, and topicalization generates the existential presupposition.11

Figure 11.4  Structure for ASL involving movement of the int-sign to [Spec, FocP] headed by a null copula BE and the topicalization of the remnant IP (Abner 2011: 28)
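One way to render the clefted-question structure described above and in the Figure 11.4 caption is the LaTeX bracketing below, applied to a WH-R question like (6) BUY CAR WHO. This is a hedged sketch (amsmath assumed): the label of the topic projection and the placement of the null copula follow the prose, not Abner's original figure:

  % The int-sign focus-moves to [Spec, FocP], whose head is a null copula BE;
  % the remnant IP then topicalizes, ending up to the left of the focused int-sign.
  \[
  [_{\text{TopP}}\ [_{\text{IP}}\ \text{BUY CAR}\ t_i\,]_k\ \ [_{\text{FocP}}\ \text{WHO}_i\ \ [\ \text{BE}_{\emptyset}\ \ t_k\ ]]]
  \]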

11.2.1.8.2  Wh-Doubling constructions in LIS

Branchini et al. (2013) argue that doubling constructions in LIS involve two identical full copies and that they yield a (focused) clefted question interpretation. In the doubling constructions in their corpus, one copy occurs before the predicate and the other after the predicate, which they argue occupy left- and right-peripheral positions and cannot be in situ. They also observe that these constructions are acceptable in contexts with existential presupposition discussed in the previous section, suggesting a clefted question analysis. They propose that the s-final copy occupies a right-branching [Spec, CP], in line with the RMA and with analyses of s-final interrogatives in LIS (Cecchetto et al. 2009). The first copy occupies the [Spec, FocP]. These copies are the heads of two movement chains: wh-movement and focus movement. These chains share the foot, which gets deleted at Spell-Out (Chomsky 2008). This is why one of the copies cannot be in situ: LIS obeys the general rule that the foot has to be deleted.12 See Branchini et al. (2013: 178f.) for a comparison of their proposal with those of Neidle (2002) and Abner (2011) for ASL, and of Nunes & Quadros (2006) for Libras.

To summarize these studies, sign languages seem to be primarily wh-in-situ languages. Constructions where interrogative signs occur in other positions, including doubling, serve various pragmatic functions.

11.2.1.9  ‘No movement’ analysis

Bouchard & Dubuisson (1995) argue against a basic, underlying word order for sign languages and, in contrast with the studies discussed so far in this chapter, they argue against the assumption that there is a designated position for int-signs in the structure and that they move from their base position to that designated position. They criticize
the previous analyses for having to posit a number of scrambling movements, and instead propose that int-​signs have no base position and can occur anywhere in the structure as long as their relation to the verb/​predicate they are semantically associated with (as an argument or an adjunct) is established, and the sign order does not create difficulty in articulation. They report that Quebec Sign Language (LSQ) signers tend to put int-​signs towards the end of the question, and they explain this fact by referring to a general tendency in sign languages to express first the ground and then the figure. See Kegl et al. (1996) for a response to the criticisms and the alternative analysis proposed in this work.

11.2.1.10  Accounts for the contrast between sign and spoken languages

The discussion in this chapter has so far mostly focused on the empirical and theoretical question of whether int-signs undergo wh-movement to a specifier position on the left or on the right. Recall that the conceptual objection of the LMA group to the RMA group’s claim is that such an analysis implies that ASL is odd typologically since rightward wh-movement is very rare (or non-existent) in the large number of spoken languages that have been studied. However, subsequent work on a number of other sign languages showed that in many of these languages, when the int-phrase is not in an in situ position, it appears most naturally in the s-final position (Zeshan 2004, 2006; i.a.). Thus, the question is now reformulated as the following: Why do sign languages pattern so differently from spoken languages in content interrogatives?

Cecchetto et al. (2009) set out to answer this typological puzzle and offer an explanation based on the function of the non-manual markers in these constructions. The proposal is initially developed based on data from LIS and then extended to other sign and spoken languages. The authors argue for an RMA analysis for LIS (Cecchetto et al. 2009: 287–290) and report that the base position of the int-phrase determines the spreading domain of the non-manual marker in both wh-movement and in situ cases. If the int-phrase is the direct object, the spreading domain starts with the base position of the direct object and extends to the right edge of the clause, excluding the subject, as in (16a) and (17a). If the wh-phrase is the subject, then the non-manual marker spreads over the entire clause, as in (16b) and (17b).

(16)  a. s-final direct object
                     wh
         GIANNI EAT WHAT
         ‘What does Gianni eat?’

      b. s-final subject
         SOMEONE GIANNI SEE DONE.
                       wh
         GIANNI SEE WHO
         ‘Someone saw Gianni. Who saw Gianni?’

(17)  a. in situ direct object
                        wh
         MARIA WHICHi DRESSi BUY
         ‘Which of those dresses did Maria buy?’

      b. in situ subject
               wh
         WHO ARRIVE
         ‘Which of them arrived?’
                                          (LIS, Cecchetto et al. 2009: 294f.)

The authors propose that the spreading domain of the non-manual marker marks the dependency between the base position of the int-sign and the C head with the wh-feature. Crucially, this is established after the manual signs are linearized, and does not interact with the hierarchical structure, i.e., does not, for instance, mark the c-command domain of the C head or the wh-operator. They discuss this dependency within the Probe-Goal theory (Chomsky 2001), whereby the non-manual marker (or the movement of the int-sign) shows that a relation holds between the Probe, the wh-feature in C, and its Goal, the [Spec, CP]. Thus, for Cecchetto et al. (2009), LIS employs more than one strategy to mark this dependency: either movement of the int-sign or the spreading domain of the non-manual marker. In the case of movement, the spreading of the non-manual marker is optional: it can occur with the int-phrase only (since it is the lexical feature of the int-sign) or spread over the dependency domain. If the int-phrase is in situ, however, the only strategy available to mark the dependency is the spreading of the non-manual marker. This is why it is obligatory.

The authors further develop a theory to explain why in most sign languages, int-signs tend to occur sentence-finally. They claim that in languages such as LIS where C is on the right, for the non-manual marker to mark the wh-dependency, the [Spec, CP] has to be at the right edge as well. Here is how the argumentation goes: Languages have linearization algorithms that dictate how a hierarchical structure gets linearized at the interface between syntax and phonology (Chomsky 1995). Based on the observation that in an overwhelming number of languages, phrases seem to have their specifiers on the left, they propose that the linearization algorithm by default puts the specifier of a category XP to the left of the lexical material contained in X'. In LIS clauses, there is an additional mechanism related to dependency marking of non-manuals that overrides this default linear order and forces the specifier of CP to be on the right. They argue that ‘linking’ the trace/position of the int-phrase with C would not be possible if the int-phrase moved to the left edge in LIS. If the non-manual marker spread only between the s-initial int-phrase and its trace position, it would not cover the s-final C, the Probe, and fail to mark the dependency. If it instead spread over the entire clause, it would not be a reliable marker to show the dependency between the Probe and the Goal. They conclude that sign languages may differ from spoken languages due to their modality: sign languages may employ non-manual markers to mark movement dependencies, and this marking requires the specifier of CP to be on the same side as its head.13

It is crucial at this point to emphasize that since the authors relate right-peripheral occurrence of specifiers to the non-manual marking of movement dependencies, in order to be able to generalize the explanation to all the sign languages that prefer right-peripheral int-phrases, this account must assume that in those languages, as well, the function of the non-manuals is marking the dependency. Thus, if a non-manual marker is shown not to mark a movement dependency, then the specifier of the head which is the Probe of this dependency relation is predicted to be on the left as a result of the default linearization algorithm.
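The dependency-marking idea can be visualized with (16a): the non-manual does not mark a c-command domain but a linear span running from the base position of the int-sign to the clause-final C. The brace in the LaTeX sketch below indicates that span; this is a schematic illustration based on the description above (amsmath assumed), not a representation taken from Cecchetto et al. (2009):

  % The wh non-manual marks the Probe-Goal dependency linearly: from the base
  % position (trace) of the moved object int-sign up to C[+wh] at the right edge,
  % excluding the subject GIANNI.
  \[
  [_{\text{CP}}\ [_{\text{TP}}\ \text{GIANNI}\ \underbrace{\ t_i\ \text{EAT}\ ]\ \ \text{WHAT}_i\ \ \text{C}^{0}_{[+\text{wh}]}}_{\text{spreading domain of the wh non-manual}}\ ]
  \]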


They suggest that head-initial sign languages where the non-manuals mark wh-dependencies would be an obvious testing ground for their hypothesis. They predict that int-phrases would have to move leftward (see Cecchetto et al. (2009: 304–308) for the discussion of the implications of this analysis for Indo-Pakistani Sign Language (IndSL) and ASL). This analysis is developed further in Geraci & Cecchetto (2013), where they present new data both from LIS and Finnish Sign Language (FinSL) that, at first sight, seem to constitute counterexamples to the proposal in Cecchetto et al. (2009); however, they argue that these data can be accounted for if language-external factors such as processing constraints (Hawkins 2004), in addition to language-internal factors, are taken into consideration.

Alba (2016) also offers an explanation for the difference between spoken languages and sign languages in the syntactic distribution of wh-expressions, drawing on the differences in the processing constraints of the working memory in the auditory and visual modalities. Alba argues that spoken and sign languages do not differ in the syntactic structure of content interrogatives, but they differ in the way these lexical items are linearized, post-syntactically. She attributes these differences in linearization to the differences in the way working memory interacts with the processing of content interrogatives. Based on the findings of experimental studies on the performance of the working memory in recalling lexical items, she concludes that the auditory modality favors sequential forward recall, whereas the visual modality does not have a clear preference regarding the direction of recall. She suggests that this may be the reason why wh-expressions can occur sentence-finally in sign languages, whereas it is impossible or rare in spoken languages. More specifically, the preference for forward recall may be prohibiting spoken languages from putting the gap (the trace of the wh-phrase) before the filler (the overt wh-phrase), since such a configuration would require the speaker/hearer to hold the gap in the working memory until s/he gets to the filler, resulting in a backward recall. In sign languages, on the other hand, the lack of preference for the direction of recall makes the final position available. Alba also argues that at least in Catalan Sign Language (LSC), the final position is the position for focused elements, [Spec, FocP], and this is why this position is the default position for wh-expressions in LSC.

11.2.2  Question particles as clause-typers

Aboh et al. (2005) argue for the existence of a question particle in Indian Sign Language (IndSL), glossed as G-WH, which occupies the head of an Interrogative Phrase (InterP) and clause-types the clause as an interrogative. G-WH is a general int-sign which occurs either by itself or together with nouns such as PLACE and NUMBER to express meanings such as ‘where’ and ‘how many’ (Zeshan 2003). It can only occur in the s-final position, and it cannot be doubled. When it combines with a noun such as PLACE, this noun can either stay in situ (wh-split structure) or occur to the left of G-WH (for examples, see Figures 11.5 and 11.6). Consequently, IndSL is claimed to allow for phonologically null wh-phrases when they are contextually recoverable, as mentioned earlier for ASL. The authors propose (18) as the functional structure for IndSL and assume that IndSL is head-initial, with other surface orders derived by means of movement of lower
functional phrases such as FocP and FinP to the specifiers of higher functional phrases such as TopP and InterP.

(18)  ForceP > TopP > InterP > FocP > FinP

They offer three derivations for three types of configurations and argue that InterP is active in all types of questions, while FocP is active only in some. For simple content interrogatives such as (19), they propose that there is no FocP, and FinP moves to [Spec, InterP], headed by G-WH.

(19)  [InterP [FinP INDEX2 FRIEND SLEEP] G-WH tFinP]
      ‘Where does your friend sleep?’
                                        (IndSL, adapted from Pfau 2006: 8, 11)

The other two configurations they analyze are focused questions and include a FocP, as in (18). One possibility in a focused question is to strand the associate phrase such as PLACE in situ, as illustrated in Figure 11.5.

Figure 11.5  Structure for IndSL wh-question with stranded associate phrase PLACE (Aboh et al. 2005: 35)
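The derivation that Figure 11.5 depicts can be schematized in LaTeX as below; the clause-internal material is abbreviated with dots, so this is only a hedged outline of the configuration described in the surrounding prose (amsmath assumed), not the original tree:

  % A null operator Op in [Spec, FocP] binds the stranded pronominal PLACE in situ;
  % the whole FocP then moves to [Spec, InterP], leaving the particle G-WH clause-final.
  \[
  [_{\text{InterP}}\ [_{\text{FocP}}\ \text{Op}_i\ [\ \dots\ \text{PLACE}_i\ \dots\ ]]_k\ \ [\ \text{G-WH}\ \ t_k\ ]]
  \]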

The authors analyze PLACE in Figure 11.5 as a pronominal element which is bound by the null operator in [Spec, FocP]. FocP moves to [Spec, InterP] to check the strong EPP feature (Chomsky 2001) of Inter. This results in clause-typing the clause as interrogative (Cheng 1991; Cheng & Rooryck 2000). Notice that as a result of this movement, G-WH, the head of Inter, ends up being linearized as the final sign of the clause.

The second possible configuration in focused questions is where the pronominal (PLACE) occurs left-adjacent to the particle G-WH. Here PLACE raises first to [Spec, FocP] to check its focus feature and then to [Spec, InterP] to check the EPP feature of Inter. Movement to [Spec, FocP] triggers movement of the remnant of FocP to [Spec, TopP] (Rizzi 1997; Aboh 2004), as represented in Figure 11.6.


Figure 11.6  Structure for IndSL wh-question with associate phrase PLACE left-adjacent to G-WH (Aboh et al. 2005: 36)

The non-manual marking is triggered by the interrogative feature in Inter and can occur either only over the G-WH in Inter or spread over the material in its specifier as well. In a configuration such as the one in Figure 11.5, the optional spreading covers the entire clause since FocP is in [Spec, InterP]. In a configuration such as Figure 11.6, on the other hand, the non-manual occurs only over the G-WH or on this head and on the phrase in its specifier but not over the material (FocP) in [Spec, TopP], since absence of non-manual marking marks topics in IndSL.

To summarize, Aboh et al. (2005) identify a question particle in IndSL, which they claim clause-types the clause as an interrogative by occupying the head of an Interrogative Phrase. Clause-typing has been attributed to some non-manual markers in other works (see 11.2.3.2 below).

11.2.3  Form and functions of non-manual marking in content interrogatives

It has been observed that some languages use brow position (raising vs. lowering), others head position (up vs. down, forward vs. backward), and others chin position (chin up vs. chin down) to mark polar vs. content interrogatives (Zeshan 2004; Cecchetto 2012; i.a.). Furthermore, works such as Schalber (2006) and Watson (2010) classify non-manuals in content interrogatives as either primary or secondary markers. The bundle of non-manual markers has been reported to spread either only on the manual sign, over a sub-clausal domain, or the entire clause, typically excluding the topic (Meir 2004: 105; Sandler & Lillo-Martin 2006; Göksel & Kelepir 2013: 14; i.a.). In early work in the generative linguistics literature, different non-manuals observed in content interrogatives are considered to spread together and to have a grammatical function as a bundle. However, spreading properties and the function of these bundles of non-manuals have been another debated issue.

249

Content interrogatives

Most of the earlier studies (see Section 11.2.1) and later ones that will be discussed in this section, which focus specifically on the functions of non-​manual marking, treat non-​manual markers as realizations of morpho-​/​semantic-​/​syntactic features. Sandler & Lillo-​Martin (2006: 459–​471) differ from these in that they argue against this assumption. They claim that the non-​manual markers instead have pragmatic functions, such as, for instance, desiring an answer vs. using an interrogative while knowing the answer. In this respect, they are similar to intonation in spoken languages. Furthermore, they argue that the spreading domain of the non-​manual marker does not necessarily correspond to a syntactic constituent, and hence, it does not mark the c-​command domain of a functional head. Rather, it is associated with intonational phrasing, predicting non-​isomorphism between syntax and prosody. Their examples include structures where the spreading of the non-​manual marker is interrupted by elements such as parentheticals, adverbs, and topics. The studies discussed below consider the functions of specific non-​manual markers. Cecchetto et  al. (2009) also attribute a novel function to (a bundle of) non-​manual markers, namely, marking wh-​dependency. This was discussed in Section 11.2.1.10.

11.2.3.1  Markers of the scope of the [+wh] operators

Dependency marking as proposed in Cecchetto et al. (2009) attributes a function to a bundle of non-manual markers. In contrast, Wilbur (2011) analyzes the distribution and spreading domains of specific non-manual markers and concludes that different markers are instantiations of different types of operators: monadic vs. dyadic. In content interrogatives, brow lowering is associated with [+wh] in C, marking the presence of a monadic wh-operator, and is not a lexical feature of the int-sign (see also Weast 2008). Its spreading domain marks the c-command domain of the [+wh] operator, which does not have a restriction and thus has simple scope (see Wilbur (2011: 162–164) for a discussion of the co-occurrence of int-signs with brow raising in certain constructions which are not matrix content interrogatives).

11.2.3.2  Functions of individual non-manual markers

Recall from the introduction of this section that most of the early work on content interrogatives treats all the non-manual markers as one ‘bundle’ and usually represents them as having only one function, namely, being a ‘wh-question non-manual’ and as having the same spreading domain. A more recent work, Watson (2010), specifically focuses on different non-manuals in content interrogatives in ASL, and investigates their functions, intensity, and spreading domains (see also Wilbur & Patschke 1999; Weast 2008; Wilbur 2011). Watson differentiates between primary and secondary markers in ASL and proposes that the primary marker, brow-lowering, marks the utterance as a content interrogative, whereas the secondary marker, headshake, marks WH- (and NEG-) signs. Other non-manual markers have more pragmatic (and prosodic) functions than marking the clause-type. Göksel & Kelepir (2013) argue that in Turkish Sign Language (TİD), two properties of the articulator ‘head’ have two distinct but related functions: (i) the head tilt (keeping the head tilted slightly forward or backward, which is achieved by tensing the neck muscles) signals that the utterance is an interrogative, as opposed to the lack of sustained head tilt which signals
that it is declarative, and (ii) forward vs. backward position of the head are the two values of the parameter head tilt and signal whether the interrogative is polar vs. content, respectively (but see Gökgöz 2010; Gökgöz & Arık 2011; Makaroğlu 2012). Head tilt is observed to spread over the entire clause and is obligatory. Thus, the authors consider this non-​manual to be the clause-​typer, i.e., typing the clause as interrogative. Moreover, content interrogatives are also reported to have headshake, ‘hs’, the spreading domain of which varies but clearly differs from head backward ‘hb’ (a value of head tilt) in that it excludes topics.            

          hb
          hs
(20)  COUNTRY WHAT
      'What is his/her country?'
      (TİD, Göksel & Kelepir 2013: 13)

The authors argue in earlier work (Göksel & Kelepir 2011) that head tilt is the realization of a morphosyntactic feature borne by the syntactic head Question Mark (Higginbotham 1993) associated with interrogative force (cf. Cheng & Rooryck 2000; Aboh et al. 2005; Aboh & Pfau 2011). In Göksel & Kelepir (2013), they raise the possibility that head tilt/​head backward and headshake may be instantiations of different functional heads responsible for the expression of the force of the clause and categories such as focus or wh-​operators, respectively. They conjecture that this would explain the different spreading domains of these non-​manual markers described above. Furthermore, the authors highlight the implication that different non-​manual markers in a given construction may have distinct functions which produce combinatorial meanings, a line of research pursued in Dachkovsky & Sandler (2009) (see Göksel & Kelepir (2013: 20–​22) for a discussion of the parallelism between the compositionality of prosodic components in spoken and sign languages). Note, finally, that this work attributes a mood function (interrogative) to an abstract non-​manual feature (head tilt) which manifests itself phonologically in different values in different sub-​types of interrogatives. The authors note (Göksel & Kelepir 2013: fn. 22), citing an anonymous reviewer, that a similar claim may be made for ASL non-​manuals in interrogatives: that brow position may be considered to be the parameter clause-​typing the clause as interrogative, with brow raising and brow lowering as its values marking polar vs. content interrogatives respectively. Recall from Section 11.2.3.1 that Wilbur (2011) argues that different positions of the brow mark different types of operators. Given the parallelism between these observations on TİD and ASL and Wilbur’s theoretical claim, a potential area of research would be to investigate whether non-​manuals of interrogatives in other sign languages display the same “one feature –​two values” pattern, as proposed in Göksel & Kelepir (2013), and whether these non-​manuals in fact are realizations of a certain type of operator, as proposed in Wilbur (2011). Further research may also shed light on what exactly clause-​typing means and what a non-​manual marker really marks: a morphosyntactic feature such as [+Q] or [+wh], marking a clause as interrogative, or the presence and scope of an operator, marking a clause as a question. Analysis of non-​ manual markers in embedded interrogatives may tease these apart (see Section 11.2.5.1).

11.2.4  Multiple wh-questions

Churng (2011) argues that ASL has three different types of multiple wh-questions with the same word order, such as YOU EAT WHAT WHY (see also Wood 2007; Abner 2011).


These three types have different prosodic properties and interpretations which are the result of different syntactic structures. Churng's (2011) account is based on her previous LMA-inspired analyses of single s-final and doubled int-signs in ASL (Churng 2006, 2007). Churng (2006) argues that single int-signs in s-final position are focused (Petronio & Lillo-Martin (1997); Watson (2010); see also Neidle's (2002) proposal in 11.2.1.6 and Abner's (2011) proposal in Section 11.2.1.8.1; i.a.). They move to a leftward [Spec, FocP], and the remnant TP further moves leftward to left-branching CP (above FocP), leaving them as s-final. Churng (2007) further argues that in doubling constructions, a complex DP carries two occurrences of the same head, and the two move independently. The head moves to the focus position in [Spec, FocP], and the int-phrase moves to [Spec, CP] via regular wh-movement. Then the remnant of TP moves and adjoins to CP. With this background in mind, let us consider the three different types of multiple wh-question constructions she identifies. Churng (2011) assumes the following hierarchical structure: CP2 > FocP > CP1 > TP. In the first type of multiple wh-question, illustrated in (21), the speaker expects multiple answers such as 'I ate x because…, I ate y because…'. WHAT wh-moves first to leftward [Spec, CP1] and then to leftward [Spec, CP2] while WHY focus-moves to a leftward [Spec, FocP], which is above CP1. Finally, the TP remnant YOU EAT moves and adjoins to CP2, which is above FocP. The landing sites of the two int-signs trigger different non-manual markings, i.e., wh- and wh-plus focus.

               wh    foc
(21)  YOU EAT, WHAT, WHY                                   (stacked wh-question)
      'What foods did you eat for what reasons?'
      (ASL, Churng 2011: 28)

Churng analyzes the other two types of multiple wh-questions as involving coordination of two clauses. In both (22) and (23), the speaker expects a single answer, albeit with a slight difference, represented as the "at all" vs. "it" readings (Gračanin-Yüksek 2007; Citko 2013). In both constructions, the int-signs have the same non-manual markers.

               wh    wh
               foc   foc
(22)  YOU EAT, WHAT, WHY                                   (coordinated; at-all reading)
      'What did you eat, and why did you eat at all?'
      (ASL, Churng 2011: 29)



               wh    wh
               foc   foc
(23)  YOU EAT, WHAT, WHY                                   (coordinated; it-reading)
      'What did you eat, and why did you eat it?'
      (ASL, Churng 2011: 31)

As for the syntactic derivation of these constructions, she proposes a Multi-Dominance (also known as Sharing or Parallel Merge) analysis (Gračanin-Yüksek 2007; Citko 2013). In this analysis, these constructions consist of coordinated structures with a null coordinator. In both, two FocPs are coordinated under andP, and the focused int-signs WHAT and WHY move to the leftward specifier positions of these two FocPs. The two constructions differ in the following: in the at-all reading in (22), two TPs undergo remnant movement from parallel structures to a CP above andP. In the it-reading in (23), the TP [YOU EAT] is shared by the two conjuncts, and it is this shared TP that undergoes remnant movement.


Regarding the prosodic properties of these constructions, Churng claims that int-​ signs are not inherently focused and that there are focused int-​signs with both wh-​ and focus non-​manual marking (brow lowering and head thrust) in addition to the regular, non-​focused int-​signs which have only wh-​non-​manual marking (brow lowering). Churng (2011) proposes that after each displacement operation, non-​manuals will be reset, which will result in prosodic breaks and/​or change in non-​manual marking (Bresnan 1971; Wagner 2005). The stacked multiple wh-​question in (21) involves a wh-​movement and a focus movement, and thus, there is a prosodic break between the two int-​signs and each is marked with a different non-​manual marker. In (22) and (23), on the other hand, both int-​signs move simultaneously from separate conjuncts into specifiers of FocP, and thus, no prosodic reset occurs and both are marked with focus and wh-​non-​manuals (see also Wilbur, Chapter 24, Figure 24.2).

11.2.5  Embedded content interrogatives

Embedded content interrogatives are crucial for the theoretical analysis of sign languages in at least the following aspects: (i) the distribution of the int-signs in embedded clauses constitutes a potential argument for the direction of wh-movement; (ii) the different distributions of the non-manual markers in matrix and embedded interrogatives may help identify the syntactic and semantic functions of the non-manual markers; and (iii) the analysis of these constructions contributes to the issue of subordination/embedding/complex sentences in sign languages in general. The data analyzed in the literature can be categorized as follows: one type involves constructions with verbs such as 'wonder' and 'know', which are known to take clausal complements in better studied languages. The discussion of such structures in sign languages which lack overt complementizers raises the question whether the clausal constituent containing an int-sign and co-occurring with these verbs is the complement of the verb or whether it is in fact a direct question. The properties of such constructions are discussed in Section 11.2.5.1. Another type of construction involves a clausal constituent with an int-sign followed by a phrase. Section 11.2.5.2 discusses this type, which has been analyzed as a rhetorical question-answer pair, as a cleft-construction, and as a question-answer construction.

11.2.5.1  Embedded content interrogatives as complement clauses

The question whether the string WHAT SUSAN BUY in a string of signs such as MOTHER WONDER WHAT SUSAN BUY is a direct question or an indirect question, functioning as the complement of the main predicate WONDER, has been answered by contrasting the properties of these potentially embedded constituents with the properties of direct questions. Two crucial differences stand out: non-manual markers and the possibility of doubling the int-signs. The earlier works on the direction of wh-movement in ASL also investigate the distribution of int-signs and non-manual markers in embedded interrogatives. Petronio & Lillo-Martin (1997) consider constructions with the predicates KNOW, DON'T-KNOW, CURIOUS, and WONDER, and assume that they take indirect question complements (Lillo-Martin 1990; Petronio 1993). They stipulate that the indirect questions differ from direct questions in that the former have only the [+wh] feature but not the [+focus] feature, based on the assumption that there must be one focus per sentence and that focus is a root phenomenon. They support this with two properties they observe: absence of what they call the question non-manual marker and absence of doubling of int-signs, which they associate with focus (in contrast with the claims in Lillo-Martin (1990) and Neidle et al. (1994); see Sandler & Lillo-Martin (2006: 452–454) for a detailed description of these differences).14 Note that even though a number of verbs can take clausal complements containing question words, they differ in their selectional requirements: whereas verbs such as 'wonder' and 'ask' have been argued to select for a clausal complement that has a question feature, i.e., a C with [+Q] (Huang 1982; i.a.), verbs such as 'tell' and 'know' do not have such a requirement. Wilbur (1996: 217, fn. 5) notes a difference in the distribution of wh-elements depending on the main verb: those of her consultants who in general prefer s-final int-signs in matrix interrogatives also accept int-signs in clause-final position (as well as in situ) in interrogative complements of main verbs which do not require [+wh] complements, such as TELL, as in (24a). With a main verb such as WONDER that requires a [+wh] complement, her consultants prefer the int-sign in clause-initial position (as well as in situ), as in (24b).

(24)  a. ELLEN TELL-ME PAUL BUY COMPUTER WHICH        (ASL, Wilbur 1996: 217)
      b. MARY WONDER WHAT SUSAN BUY                   (ASL, Wilbur 1996: 250)

Neidle et al. (2000: 190, fn. 24), on the other hand, argue against treating constructions such as (24b) as involving embedded interrogatives. They claim that these are in fact semi-questions (Suñer 1993), and that the embedded clause displays properties of direct questions in terms of word order and non-manual marking. Meir & Sandler (2008: 155f.) briefly discuss embedded interrogatives in Israeli Sign Language (ISL) and note that the clausal complement often occurs before the main clause and sometimes the main clause can occur before and after the clausal complement. Hakgüder (2015) shows that, similarly to ASL but in contrast with Libras, embedded constituent interrogatives in TİD do not allow doubling even though direct interrogatives do. The only possible positions of int-signs are in situ and clause-final. Furthermore, complements of two different types of verbs, namely 'ask'-type and 'know'-type, behave differently, supporting previous studies on the syntax and semantics of the complements of these types of verbs (Suñer 1993; i.a.). One such difference concerns the order between the main verb and its clausal complement. While 'know'-type verbs in TİD allow a variety of orders, 'ask'-type verbs are restricted to the SOV order. The complements of these verbs also differ in their non-manual markers. The complements of 'ask'-type verbs co-occur with all the non-manual markers observed in direct questions, namely, head backward, brow raise, and headshake, whereas those of 'know'-type verbs lack head backward, which Göksel & Kelepir (2013) claim marks a TİD clause as a content interrogative. Hakgüder (2015: 108) argues that the choice of the non-manuals is related to the semantic type of the complement: head backward is found with questions (and not necessarily with all interrogatives, as proposed in Göksel & Kelepir (2013)), and it is absent when the complement type is a proposition (see Section 11.2.3.2). To summarize, research on embedded content interrogatives in sign languages not only has the potential to shed light on the debate on syntax of root interrogatives, the functions of different non-manual markers, the syntactic consequences of the information-structural properties of interrogatives, and the issue of subordination/embedding/complex sentences in sign languages, but also on the claim that grammar distinguishes between different semantic types of clausal complements.

11.2.5.2  Rhetorical questions, wh-clefts, or question-answer clauses?

Another construction regarding embedded interrogatives concerns structures such as the sequence of signs L-E-E PAINT WHAT CHAIR. In early works on ASL, such as Baker-Shenk (1983), such constructions were analyzed as rhetorical question-answer sequences, see (25). The non-manual marked as 'rhet.q.' is reported to be brow raising.

      t                  rhet.q.        intense
(25)  P-A-T, PEA BRAIN*, LIVE "WHAT", INDEX-FAR RT O-C
      'Pat is really dumb! Where does she live? Way out in Ocean City.'
      (ASL, Baker-Shenk 1983: 80)

Wilbur (1996), on the other hand, analyzes these structures as wh-cleft constructions, and argues against treating them as question-answer pairs involving either rhetorical or echo questions. She proposes that these constructions are syntactically single sentences and function as providing information in a focus format. She also argues that they do not involve right dislocation of the constituent to the right of the int-sign and that the interrogative part of the sentence is not a clausal subject with a predicate nominal. One of the arguments presented is that rhetorical questions, where the speaker does not necessarily answer his/her own question, are marked with brow furrow (= brow lowering) just as non-rhetorical questions are, whereas in structures such as (26), the non-manual is brow raising.

                         br
(26)  LEAVE MY SHOES WHERE, KITCHEN
      'Where I left my shoes was in the kitchen.'
      (ASL, Wilbur 1996: 216)

Another argument concerns the distribution of int-signs: in structures such as (26), the int-sign uniformly occurs clause-finally, and it cannot be doubled. Both properties are different from regular interrogatives. Regarding the pragmatic functions of these constructions, she shows that they display focus properties (Wilbur 1996: 221–229). She analyzes these structures syntactically as follows: underlyingly, there is a small clause with a focused phrase, such as KITCHEN in (26), in the subject position, and the wh-clause is in the predicate position. Since ASL requires focus material to be s-final, the wh-clause is preposed and moves to matrix [Spec, CP]. The functional head I, whose complement is the small clause, has the feature [+focus], which triggers this movement. As a consequence, the subject of the small clause, KITCHEN in (26), appears s-finally. Meir & Sandler (2008: 155f.) report that in ISL, there is a special construction with a subordinated interrogative clause which occurs with a special non-manual marker, slanted eyebrows, 'sb', the translation of which sounds like a clefted clause even though the authors do not specify it as such (27).


             sb
(27)  SHIRT INDEX-I BUY WHERE --- NEAR STATION
      'The place I bought the shirt is near the station.'
      (Literally: 'Where I bought the shirt is near the station.')
      (ISL, Meir & Sandler 2008: 155f.)

Caponigro & Davidson (2011) argue against the wh-cleft analysis in Wilbur (1996) and analyze this construction as involving an embedded interrogative in the subject position of a complex clause ('Q-constituent', an interrogative CP) and an embedded declarative ('A-constituent', a declarative IP) linked by a null identity copula. In example (28), Q stands for 'Question', and A stands for 'Answer'.

                          br
(28)  [Q-constituent JOHN BUY WHAT], [A-constituent BOOK]
      'What John bought is a book.'
      (ASL, Caponigro & Davidson 2011: 329)

They claim that these complex clauses are syntactically and semantically declaratives, and hence, the Q-​constituents are not information-​seeking. The two clauses occur as the subject and the object of the silent copula which semantically behaves like an identity relation (see Kimmelman & Pfau, Chapter 26, Figure 26.3). In support of their claim that the Q-​constituents are interrogatives, they provide the following arguments: (i) they look identical to matrix interrogatives, (ii) they can have the same variety of int-​signs as matrix interrogatives, and (iii) no other construction such as headed or headless relative clauses contain int-​signs. They argue that Q-​constituents show properties of embedding:  they cannot contain doubled int-​signs, and they require brow raising in contrast with brow furrowing (= brow lowering) found in matrix content interrogatives. The authors claim that brow raising on the Q-​constituent is triggered by the null copula (higher predicate), similarly to the spreading of the non-​manual marker of the main verb over the interrogative complement clause proposed in Petronio & Lillo-​Martin (1997). What is crucial is the claim that the non-​manual marking for matrix interrogatives does not change depending on whether the interrogative is information-​seeking or not (i.e., rhetorical, echo questions, etc.). So, they dismiss the potential objection that the difference in non-​manual marking described here may be related to the fact that Q-​constituents are non-​information-​seeking. They further argue that the entire clause is a single declarative and that these constructions do not involve direct quoting. To capture the exhaustivity reading of these constructions, Caponigro & Davidson (2011:  351–​359) propose that the two clauses making up the construction denote the same “exhaustified” proposition. This is achieved by the presence of an answerhood operator applying to the Q-​constituent and an exhaustivity operator applying to the A-​ constituent. The authors further develop a pragmatic analysis of this construction within the framework of the Question Under Discussion Theory (Roberts 1998; Büring 2003), and they propose that the QACs represent an overt instantiation of a sub-​question under discussion and the answer to it.

11.3  Experimental perspectives

This main section discusses the few experimental studies that investigate the acquisition and processing of content interrogatives in sign languages. The issues covered include the developmental stages of the acquisition of the int-signs and of the non-manual markers in deaf children acquiring ASL from their parents, the emergence and use of int-signs in a homesign system, the emergence of grammatical non-manual markers in a young sign language, and the role of age of acquisition as a factor influencing the processing of content interrogatives.

11.3.1  Acquisition of content interrogatives

Fischer (1974), one of the first studies on the acquisition of content interrogatives, reports the findings of a study on deaf children acquiring ASL. She found that the children went through three stages: in the first stage, the deaf child uses a generalized int-sign which in adult language is used to express something like "Well?"15 In the second stage, s/he uses a limited number of int-signs: WHERE, WHAT, and WHO. The use of the latter two is very limited. The author reports that the children seemed not to understand the questions posed to them in these earlier stages, especially when the int-sign in the question is preposed to the s-initial position. Around the beginning of the third stage, the child begins to respond appropriately to int-questions with the int-sign preposed, and later she adds some new int-signs such as HOW and HOW-MANY. Reilly & McIntire (1991) and subsequent work (see Reilly (2006) for references) analyze the relation between affective facial expressions and linguistic non-manual signals as they emerge and co-develop in deaf children acquiring ASL. They start out with the hypothesis that since prelinguistic babies are fluent users of affective and communicative facial behaviors, deaf children born to deaf and signing parents (DOD) will quickly acquire linguistic non-manual marking in constructions such as conditional clauses, negative sentences, adverbials, and wh-questions. In the development of all these constructions, they found the same surprising pattern: during the one-sign stage, deaf children produce single manual signs accompanied by facial expressions or mouthing, imitating adult behavior, but they seem to be processing these as holistic units containing manual signs accompanied by facial expressions (see also Gökgöz, Chapter 12). In the case of wh-questions, children produce single int-signs with furrowed brows. However, when they start producing multi-sign utterances (after the onset of syntax, around 2 years of age), these facial expressions accompanying the manual signs disappear. They produce wh-questions with manual int-signs but with blank faces (the so-called 'hands before faces' phenomenon). The authors propose that this is the stage when they start analyzing the two channels, hands and faces, as independent components, and the signs are no longer holistic packages. Thus, during this stage, deaf children (as well as their parents) seem to view the hands as the primary linguistic articulators and rely on sequential encoding of language rather than the target use of simultaneous articulation. After age 3;0, the children start using some non-manuals but not always the same as the adults: there are cases where the child uses headshake and/or head tilt but no furrowed brows. The authors note that this may be the result of variability of non-manuals in the adult language, referring to the lack of furrowed brows in ASL Motherese (Reilly & McIntire 1990) and their own observations of adult conversations. Children's questions lack the obligatory non-manual markers as late as age 4. Around the age of 5, non-manual markers of wh-questions emerge, but at first they only have scope over the int-signs, rather than the entire question. Only around age 6 or 7 do children seem to have mastered the appropriate non-manual marking of wh-questions.
Based on their findings on the acquisition of wh-questions, as well as other constructions such as conditionals, negation, and adverbials, the authors conclude


that the linguistic system seems to be independent and does not make use of the prelinguistic communicative abilities. Lillo-​Martin (2000) discusses the results of a study on the syntactic positions of the int-​signs in child language, also conducted with DOD children acquiring ASL but with a later age range: 4;1–​6;9. She reports that the s-​initial position is the overall most preferred position, especially for subject and adjunct questions. Older children also commonly use double constructions. Other constructions such as null int-​signs and split interrogatives are also found, even with some of the youngest children. Finally, her findings support Reilly & McIntire’s (1991) findings regarding the non-​manual marking: even the 6-​year-​ olds do not consistently use the non-​manual markers of questions appropriately. Lillo-​Martin & Quadros (2006) analyze the distribution of int-​signs in the production data by DOD children acquiring ASL and Libras with an earlier age range of 1;1–​ 3;0. Crucially for their study, they do not find rightward-​moved int-​signs in these data. They report that the earliest productions contain in situ and s-​initial int-​signs, doubling comes a few months later, around the same time when non-​wh focus doubles emerge (Lillo-​Martin & Quadros 2005) with children acquiring ASL but not Libras. Based on these observations, including the fact that both wh-​and non-​wh doubles emerge later and around the same time, they conclude that the late emergence of s-​final int-​signs in child language supports their earlier proposals that these are focus elements, and not the result of regular wh-​movement. Similar questions have been investigated with bimodal bilingual children, children who use both a spoken language and a sign language. Lillo-​Martin et al. (2012, 2016) study the acquisition of content interrogatives by bimodal children acquiring English –​ASL and Brazilian Portuguese –​Libras and analyze the results within their theory of bimodal bilingual acquisition, the Language Synthesis model (for an overview, see Donati, Chapter 27). This model combines the proposals of two approaches to code-​switching:  MacSwan’s (2000) Minimalist approach and den Dikken’s (2011) Distributed Morphology approach (Halle & Marantz 1993). According to the authors’ Language Synthesis model, in the language of a bilingual, while the syntactic structure can be formed based on the syntactic aspects of one language, lexical items (roots and morphemes) can be chosen from either of the languages. Int-​signs in ASL and Libras have been reported to show similar distributions (Nunes & Quadros 2006; i.a.). English and Brazilian Portuguese are also reported to share similarities: int-​signs are moved leftwards, and in situ constructions are possible in certain contexts. Regarding the influence of sign language on spoken language, the researchers report that the bimodal bilinguals use int-​signs predominantly s-​initially, but also with a noticeably higher percentage of in situ/​final and double constructions than the English and Brazilian Portuguese monolingual children. Also, in situ constructions emerge earlier in the bilinguals. As for the influence of spoken language on sign language, they find frequent mouthing and instances of spoken language word order as well as a higher percentage of s-​initial int-​signs than in the speech of monolingual deaf controls.

11.3.2  Emergence of content interrogatives in a homesign system

Franklin et al. (2011) identify a flip gesture (i.e., flipping the palm upward, reminiscent of the generalized int-sign reported in Fischer (1974), discussed in Section 11.3.1) in some of the utterances of an American home-signing child (age range: 2;10–3;11), which systematically occurs not only in content interrogatives but also in exclamatives ("What! It didn't bubble!") and in referential expressions of locations ("the place where the puzzle goes").


By pointing out the parallelism between the distribution of the flip gesture and the distribution of int-signs in other languages, they argue that the flip should be considered an int-sign in this homesign system. They observe that most of the flip gestures in the child's questions and exclamatives occur at the end of the sentence (81% and 75%, respectively). However, none of the flip gestures in the referential expressions of locations (three occurrences in the entire data set) occurs at the end. They conclude that the flip gesture found in interrogatives and exclamatives functions as an illocutionary operator, i.e., it occurs at the periphery of the clause, marking it as a question or an exclamative. It does not have such a function when used in a referential expression of location: in that use, the flip does not occur at the end of the clause and thus cannot be marking a peripheral functional category.16 See Franklin et al. (2011) for the details of their analysis.

11.3.3  Emergence of grammatical non-manual markers for content interrogatives in young sign languages

Kocab et al. (2011) report on a study in which they investigate the use of non-manual markers in content interrogatives in a young sign language, Nicaraguan Sign Language. The findings are based on elicitation tasks with seven younger and nine older signers. The authors compare the non-manual markers (brow furrow, nose wrinkle, and chin lift, which are also gestures used in the hearing community) in the language of older and younger signers. They find that brow furrow is used mostly by the younger signers and that younger signers seem to use non-manual markers more frequently than the older signers, but that there is still significant variability, suggesting an ongoing grammaticalization process.

11.3.4  Processing of content interrogatives

In a study on sentence processing in ASL as a function of age of first language acquisition, Boudreault & Mayberry (2006) investigate this issue with content interrogatives, one of six constructions they study. Two sets of grammatical questions were used as stimuli: one set includes questions with WHO and WHY accompanied by the obligatory non-manual marker; the second set contains questions with the non-manual markers but without the int-signs. The ungrammatical questions for each of these sets were formed by separating the wh-non-manual, or the int-sign and the accompanying non-manual marker, from the grammatical clause. The participants were asked to judge the grammaticality of the questions. They found that as delay in exposure to a first language increased, grammatical judgment accuracy decreased. The non-native signers are reported to have made more errors on ungrammatical compared to grammatical examples, and to be slower in responding to ungrammatical examples than to grammatical ones. The authors also argue that their results show that the non-manual markers in content interrogatives are phrasal and clausal rather than lexical. See also Section 11.2.1.10 for a discussion of Geraci & Cecchetto (2013) and Alba (2016), where the distribution of int-signs is argued to be related to processing constraints.

11.4  Conclusion

Research on content interrogatives in sign languages has shown that the grammar of these constructions is as complex as that of better studied spoken languages: word order


interacts with information structure, prosody interacts with syntax, and each prosodic component may serve a distinct function. Further research will undoubtedly contribute to experimental questions such as the acquisition of prosody and the effect of age of acquisition on an individual's grammar, as well as to theoretical issues such as the presence vs. absence of movement, linearization, the interaction of grammar with processing constraints, and, more broadly, the nature of the interaction of the components of grammar in the overall architecture.

Acknowledgments

I thank the anonymous reviewer and the editors for their invaluable help in improving this chapter. Needless to say, all errors are mine. The work on this chapter has been partially supported by the SIGN-HUB project, which has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement No. 693349.

Notes

1 The title of the chapter uses the language-neutral term 'content interrogative' for the type of interrogative that is the focus here. In the text, I also use another language-neutral term, viz. 'interrogative signs', instead of 'wh-words/phrases', but I also use the term wh- when no language-neutral term is readily available.
2 I try to remain as faithful as possible to the authors' conventions in glossing and representing the non-manual marking, since in most of the cases, these reflect the authors' analyses. Nevertheless, this causes some inconsistency of representation throughout the chapter among the examples copied from different works.
3 Neidle et al. (2000) also discuss that some variants of structures such as (7) may be acceptable due to the perseveration of the manual or non-manual features of the initial wh-topic throughout the question. Example (i) illustrates a case in which there are two occurrences of the sign 'WHAT'. The non-dominant hand retains the handshape of 'WHAT' while the rest of the question is articulated with the dominant hand ('manual perseveration'), finishing with the full two-handed articulation of the sign.

      wh ----------------------------------- wh
(i)   'WHAT', JOHN (1h)LOVE 'WHAT'             [dominant hand]
      'WHAT' ------------------------- 'WHAT'  [non-dominant hand]
      'What, what does John love?'
      (ASL, Neidle et al. 2000: 118)

However, as a variant of (i), the final 'WHAT' may also be articulated solely by the perseveration of the non-dominant hand. In (ii) below, the dominant hand is involved only in the articulation of the initial int-sign; however, the non-dominant hand holds the handshape of the sign until the end of the question.

      wh ----------------------------------- wh
(ii)  'WHAT', JOHN LIKE                        [dominant hand]
      'WHAT' ------------------------- 'WHAT'  [non-dominant hand]
      'What, what does John like?'
      (ASL, Neidle et al. 2000: 118)

According to the authors, (ii) is an acceptable structure, whereas a variant in which the non-​ dominant hand does not perseverate until the end of the question would not be, as in (iii).


      wh ------------------ wh
(iii) *'WHAT', JOHN LIKE         [dominant hand]
       'WHAT'                    [non-dominant hand]
      (ASL, Neidle et al. 2000: 118)

This order is only possible if the non-manual marking is intense in the same position where the manual int-sign would otherwise have appeared and the final manual sign, LIKE in this example, is held, as shown in (iv) below.

      wh ------------------ [intense wh-marking]
(iv)  'WHAT', JOHN LIKE
      'What is it that John likes?'
      (ASL, Neidle et al. 2000: 189)

4 Fischer (2006: 179) suggests that many of the s-final int-signs in Neidle et al.'s (2000) data may actually be a tag sign, WELL.
5 Baker & Padden (1978), Baker-Shenk & Cokely (1980), and Baker-Shenk (1983) are the earliest reports on the non-manual markers of ASL content interrogatives.
6 See Neidle (2002) for a detailed discussion of similarities between non-interrogative constructions with focused phrases and interrogatives with s-final int-signs in terms of co-occurrence restrictions and non-manual marking and the distribution of an indefinite focus particle.
7 Recall from Section 11.2.1.1 that the LMA analysis predicts the possibility of structures such as (14) in ASL, while the RMA researchers report that these configurations are not possible.
8 See also Geraci (2009) for a discussion of LIS interrogatives as constituting a challenge for the linearization accounts in Chomsky (2000) and Fox & Pesetsky (2005).
9 See Nunes & Quadros (2006: 468–470) for the discussion of a potential counterexample where both of the phrasal copies are at the edges of the clause, and Nunes & Quadros (2006: 473) for their analysis of the grammatical configuration where a phrasal copy is at the left edge whereas a copy of the wh-determiner is at the right edge of the clause:

   (i)  WHICH BOOK JOHN BUY YESTERDAY WHICH
        'Which book exactly did John buy yesterday?'
        (Libras, Nunes & Quadros 2006: 473)

10 See also Makaroğlu (2012) for an analysis of TİD interrogatives along the lines of Nunes & Quadros (2006).
11 See Abner (2011: 29–30) for arguments for the topichood of the initial constituent, for the existence of the null copula as the head of FocP, and for the existence of declarative cleft constructions in ASL. See also Wilbur (1996) and Wilbur & Patschke (1999: 26f.) for a discussion of embedded wh-clefts in ASL that co-occur with brow raise.
12 See Branchini et al. (2013: 179–182) for the discussion of another doubling construction, which they call 'improper wh-duplication' and which involves a sign they gloss as Q-ARTICHOKE. This sign is not identical to the other int-sign in the question and occurs s-finally.
13 The authors (Cecchetto et al. 2009: 309–314) also discuss prosodic marking of wh-dependencies in spoken languages (Deguchi & Kitagawa 2002; Ishihara 2002; Richards 2006) and point out the parallelisms between intonational patterns and non-manual markers. See Alba (2010, cited in Wilbur 2011: 150) for an analysis of LSC interrogatives along the lines proposed in Cecchetto et al. (2009) and Lillo-Martin & Quadros (2010) for a criticism of Cecchetto et al.'s claim that sign languages in general may be marking wh-dependencies with non-manual markers.
14 Nunes & Quadros (2006) report that, in contrast to the ASL facts, indirect questions in Libras allow doubling, which leads the authors to conclude that Libras must allow focus in embedded clauses as well.
15 This generalized wh-sign is also attested in Franklin et al.'s (2011) study (their 'flip form'), discussed in Section 11.3.2, and described as 'two palms down rotate to palms up', reminiscent of ASL 'what'.
16 The authors discuss the possibility of analyzing what they call referential expressions of location as free relatives. Example (i) illustrates the construction:


   (i)  flip-COWBOY-flip-point at cowboy behind the experimenter's back
        'The place where the cowboy is at is here.'
        (3;00, #145) (adapted from Franklin et al. 2011: 407)

References Aarons, Debra. 1994. Aspects of the syntax of American Sign Language. Boston, MA:  Boston University PhD dissertation. Aarons, Debra, Benjamin Bahan, Judy Kegl, & Carol Neidle. 1992. Clausal structure and a tier for grammatical marking in American Sign Language. Nordic Journal of Linguistics 15. 103–​142. Abner, Natasha. 2011. WH-​words that go bump in the right. In Mary Byram Washburn, Katherine McKinney-​Bock, Erika Varis, Ann Sawyer, & Barbara Tomaszewicz (eds.), Proceedings of the 28th West Coast Conference on Formal Linguistics [WCCFL], 24–​32. Somerville, MA: Cascadilla Proceedings Project. Aboh, Enoch Oladé. 2004. Left or right? A view from the Kwa periphery. In David Adger, Cécile de Cat, & George Tsoulas (eds.), Peripheries:  Syntactic edges and their effects, 165–​189. Dordrecht: Kluwer. Aboh, Enoch Oladé & Roland Pfau. 2011. What’s a wh-​word got to do with it? In Paola Benincà & Munaro Nicola (eds.), Mapping the left periphery:  The cartography of syntactic structures, 91–​124. Oxford: Oxford University Press. Aboh, Enoch Oladé, Roland Pfau, & Ulrike Zeshan. 2005. When a wh-​word is not a wh-​word: The case of Indian Sign Language. In Tanmoy Bhattacharya (ed.), The yearbook of South Asian languages and linguistics, 11–​43. Berlin: Mouton De Gruyter. Alba, Celia. 2010. Les interrogatives-​Qu en llengua de signes catalana (LSC):  Bases per a una anàlisi. Barcelona: University of Barcelona MA thesis. Alba, Celia. 2016. Wh-​questions in Catalan Sign Language. Barcelona: Universitat Pompeu Fabra PhD dissertation. Bahan, Benjamin. 1996. Non-​manual realization of agreement in ASL. Boston, MA:  Boston University dissertation. Baker, Charlotte & Carol Padden. 1978. Focusing on the nonmanual components of American Sign Language. In Patricia A. Siple (ed.), Understanding language through sign language research, 27–​58. New York: Academic Press. Baker-​Shenk, Charlotte. 1983. A micro-​analysis of the non-​manual components of questions in American Sign Language. Berkeley, CA: University of California PhD dissertation. Baker-​Shenk, Charlotte & Dennis Cokely. 1980. American Sign Language: A teacher’s resource text on grammar and culture. Silver Spring, MD: T.J. Publishers. Bouchard, Denis & Colette Dubuisson. 1995. Grammar, order and position of wh-​signs in Quebec Sign Language. Sign Language Studies 87. 99–​139. Boudreault, Patrick & Rachel I. Mayberry. 2006. Grammatical processing in American Sign Language: Age of first-​language acquisition effects in relation to syntactic structure. Language and Cognitive Processes 21. 608–​635. Branchini, Chiara, Anna Cardinaletti, Carlo Cecchetto, Caterina Donati, & Carlo Geraci. 2013. Wh-​duplication in Italian Sign Language (LIS). Sign Language & Linguistics 16(2). 157–​188. Bresnan, Joan. 1971. Sentence stress and syntactic transformations. Language 47(2). 257–​281. Büring, Daniel. 2003. On D-​trees, beans, and B-​accents. Linguistics and Philosophy 26. 511–​545. Caponigro, Ivano & Kathryn Davidson. 2011. Ask, and tell as well:  Question-​answer clauses in American Sign Language. Natural Language Semantics 19(4). 323–​371. Cecchetto, Carlo. 2012. Sentence types. In Roland Pfau, Markus Steinbach, & Bencie Woll (eds.), Sign language: An international handbook, 292–​315. Berlin: De Gruyter Mouton. Cecchetto, Carlo, Carlo Geraci, & Alessandro Zucchi. 2009. Another way to mark syntactic dependencies: The case for right peripheral specifiers in sign languages. Language 85(2). 278–​320. Cheng, Lisa Lai-​Shen. 1991. 
On the typology of wh-questions. Cambridge, MA: MIT PhD dissertation. Cheng, Lisa Lai-Shen. 2009. Wh-in-situ, from the 1980s to now. Language and Linguistics Compass 3(3). 767–791. Cheng, Lisa Lai-Shen & Johan Rooryck. 2000. Licensing wh-in-situ. Syntax 3(1). 1–19.


Meltem Kelepir Chomsky, Noam. 1977. On wh-​movement. In Peter W. Culicover, Thomas Wasow, & Adrian Akmajian (eds.), Formal syntax, 71–​132. New York: Academic Press. Chomsky, Noam. 1986. Barriers. Cambridge, MA: MIT Press. Chomsky, Noam. 1995. The minimalist program. Cambridge, MA: MIT Press. Chomsky, Noam. 2000. Minimalist inquiries: The framework. In Roger Martin, David Michaels, Juan Uriagereka, & Samuel Jay Keyser (eds.), Step by step: Essays on minimalist syntax in honor of Howard Lasnik, 89–​156. Cambridge, MA: MIT Press. Chomsky, Noam. 2001. Derivation by phase. In Michael Kenstowicz (ed.), Ken Hale: A life in language, 152. Cambridge, MA: MIT Press. Chomsky, Noam. 2008. On phases. In Robert Freidin, Carlos P. Otero, & Maria Luisa Zubizarreta (eds.), Foundational issues in linguistic theory: Essays in honor of Jean-​Roger Vergnaud, 133–​166. Cambridge, MA: MIT Press. Churng, Sarah. 2006. Synchronizing modalities: A model for synchronizing gesture and speech as evidenced by American Sign Language. In Donald Baumer, David Montero, & Michael Scanlon (eds.), Proceedings of the 25th West Coast Conference on Formal Linguistics (WCCFL), 14–​22. Somerville, MA: Cascadilla Proceedings Project. Churng, Sarah. 2007. Double constructions in ASL: Resolved by resumption. Poster presented at the 81st Annual Meeting of the Linguistic Society of America. Anaheim, CA. Churng, Sarah. 2011. Syntax and prosodic consequences in ASL:  Evidence from multiple wh-​ questions. Sign Language & Linguistics 14(1). 9–​48. Citko, Barbara. 2013. The puzzles of wh-​questions with coordinated wh-​pronouns. In Theresa Biberauer & Ian Roberts (eds.), Challenges to linearization, 295–​329. Berlin: Mouton de Gruyter. Dachkovsky, Svetlana & Wendy Sandler. 2009. Visual intonation in the prosody of a sign language. Language and Speech 52(2/​3). 287–​314. Deguchi, Masanori & Yoshihisa Kitagawa. 2002. Prosody and wh-​questions. In Masako Hirotani (ed.), Proceedings of NELS (North-​Eastern Linguistic Society) 32, 73–​92. Amherst, MA: GLSA. den Dikken, Marcel. 2011. The distributed morphology of code-​switching. Paper presented at the UIC Bilingualism Forum, University of Illinois, Urbana-​Champagne. Fischer, Susan D. 1974. The ontogenetic development of language. In Erwin W. Straus (ed.), Language and language disturbances, 22–​43. Pittsburgh: Duquesne University Press; distributed by Humanities Press, New York. Fischer, Susan D. 2006. Questions and negation in American Sign Language. In Ulrike Zeshan (ed.), Interrogative and negative constructions in sign languages, 165–​197. Nijmegen: Ishara Press. Fox, Danny & David Pesetsky. 2005. Cyclic linearization of syntactic structure. Theoretical Linguistics 31(12). 1–​45. Franklin, Amy, Anastasia Giannakidou, & Susan Goldin-​Meadow. 2011. Negation, questions, and structure building in a homesign system. Cognition 118(3). 398–​416. Geraci, Carlo. 2009. Phase theory, linearization and zig-​zag movement. In Kleanthes K. Grohmann (ed.), Explorations of phase theory: Interpretation at the interfaces, 133–​159. Berlin: Mouton de Gruyter. Geraci, Carlo & Carlo Cecchetto. 2013. Neglected cases of rightward movement: When wh-​phrases and negative quantifiers go to the right. In Gert Webelhuth, Manfred Sailer & Heike Walker (eds.), Rightward movement in a comparative perspective, 211–​241. Amsterdam: John Benjamins. Gökgöz, Kadir. 2010. Licencing questions in Turkish Sign Language. West Lafayette, IN: Purdue University paper. Gökgöz, Kadir & Engin Arık. 2011. 
Distributional and syntactic characteristics of nonmanual markers in Turkish Sign Language. Proceedings of 7th Workshop on Altaic Formal Linguistics, 63–​78. Cambridge, MA: MIT Working Papers in Linguistics. Göksel, Aslı & Meltem Kelepir. 2011. The syntax of nested spreading domains. Poster presented at Formal and Experimental Advances in Sign Language Theory (FEAST), University of Venice. Göksel, Aslı & Meltem Kelepir. 2013. The phonological and semantic bifurcation of the functions of an articulator:  Head in questions in Turkish Sign Language. Sign Language & Linguistics 16(1).  1–​30. Gračanin-​Yüksek, Martina. 2007. About sharing. Cambridge, MA: MIT PhD dissertation. Hakgüder, Emre. 2015. Complex clauses with embedded constituent interrogatives in Turkish Sign Language (TİD). Istanbul: Bogazici University MA thesis.


Content interrogatives Halle, Morris & Alec Marantz. 1993. Distributed morphology and the pieces of inflection. In Ken Hale & Samuel Jay Keyser (eds.), The view from Building 20: Essays in linguistics in honor of Sylvain Bromberger, 111–​176. Cambridge, MA: MIT Press. Hawkins, John A. 2004. Efficency and complexity in grammars. Oxford: Oxford University Press. Higginbotham, James. 1993. Interrogatives. In Ken Hale & Samuel Jay Keyser (eds.), The view from Building 20:  Essays in linguistics in honor of Sylvain Bromberger, 195–​227. Cambridge, MA: MIT Press. Huang, Cheng-​ Teh James. 1982. Logical relations in Chinese and the theory of grammar. Cambridge, MA: MIT PhD dissertation. Hoza, Jack, Carol Neidle, Dawn MacLaughlin, Judy Kegl, & Benjamin Bahan. 1997. A unified syntactic account of rhetorical questions in American Sign Language. In Carol Neidle, Dawn MacLaughlin, & Robert G. Lee (eds.), Syntactic structure and discourse function: An examination of two constructions in American Sign Language, 123. American Sign Language Linguistic Research Project report 4. Boston: Boston University. Ishihara, Shinichiro. 2002. Invisible but audible wh-​ scope marking:  Wh-​ constructions and deaccenting in Japanese. In Line Mikkelsen & Christopher Potts (eds.), Proceedings of the 21st West Coast Conference on Formal Linguistics, 180–​193. Somerville, MA: Cascadilla Press. Kegl, Judy, Carol Neidle, Dawn MacLaughlin, Jack Hoza, & Benjamin Bahan. 1996. The case for grammar, order and position in ASL: A reply to Bouchard and Dubuisson. Sign Language Studies 90. 123. Kocab, Annemarie, Jennie E. Pyers, & Ann Senghas. 2011. The emergence of grammatical markers for questions in Nicaraguan Sign Language: Child or adult driven? Poster presented at the Annual Meeting of the Society for Research on Child Development (SRCD), Montreal. Lasnik, Howard & Mamoru Saito. 1994. Move alpha:  Conditions on its application and output. Cambridge, MA: MIT Press. Lillo-​Martin, Diane. 1990. Parameters for questions: Evidence from American Sign Language. In Ceil Lucas (ed.), Sign language research: Theoretical issues, 211–​222. Washington, DC: Gallaudet University Press. Lillo-​Martin, Diane. 2000. Aspects of the syntax and acquisition of wh-​questions in American Sign Language. In Karen Emmorey & Harlan L. Lane (eds.), The signs of language revisited: An anthology to honor Ursula Bellugi and Edward Klima, 401–​413. Mahwah, NJ: Lawrence Erlbaum. Lillo-​Martin, Diane & Susan D. Fischer. 1992. Overt and covert wh-​questions in American Sign Language. Paper presented at the Fifth International Symposium on Sign Language Research, Salamanca. Lillo-​Martin Diane, Helen Koulidobrova, Ronice Müller de Quadros & Deborah Chen Pichler. 2012. Bilingual language synthesis: Evidence from wh-​questions in bimodal bilinguals. In Alia K. Biller, Esther Y. Chung, & Amelia E. Kimball (eds.), Proceedings of the 36th Boston University Conference on Language Development, 302–​314. Somerville, MA: Cascadilla Press. Lillo-​ Martin, Diane & Ronice Müller de Quadros. 2004. Structure and acquisition of focus constructions in ASL (American Sign Language) and LSB (Lingua Sinais Brasileira). Paper presented at Theoretical Issues in Sign Language Research 8 (TISLR 8), Barcelona. Lillo-​Martin, Diane & Ronice Müller de Quadros. 2005. The acquisition of focus constructions in American Sign Language and Lingua Sinais Brasileira. In Alejna Brugos, Manuella R. 
Clark-​Cotton, & Seungwan Ha (eds.), Proceedings of the 29th Boston University Conference on Language Development, 365–​375. Somerville, MA: Cascadilla Press. Lillo-​Martin Diane & Ronice Müller de Quadros. 2006. The position of early wh-​elements in American Sign Language and Brazilian Sign Language. In Kamal Ud Deen, Jun Nomura, Barbara Schulz, & Bonnie D. Schwartz (eds.), The Proceedings of the Inaugural Conference on Generative Approaches to Language Acquisition-​North America, Honolulu (vol.1), 195–​203. University of Connecticut Occasional Papers in Linguistics 4. Cambridge, MA: MIT Working Papers in Linguistics. Lillo-​ Martin, Diane & Ronice Müller de Quadros. 2010. Interfaces and wh-​ questions in sign languages. Paper presented at 20th Colloquium on Generative Grammar, Barcelona. Lillo-​Martin, Diane, Ronice M. de Quadros, & Deborah Chen Pichler. 2016. The development of bimodal bilingualism: Implications for linguistic theory. Linguistic Approaches to Bilingualism 6(6), 719–​755.


Meltem Kelepir MacSwan, Jeff. 2000. The architecture of the bilingual language faculty:  Evidence from intrasentential code switching. Bilingualism: Language and Cognition 3(1). 37–​54. Makaroğlu, Bahtiyar. 2012. Türk İşaret Dilinde soru:  Kaş hareketlerinin dilsel çözümlemesi. [Questions in Turkish Sign Language: A linguistic analysis of brow movement]. Ankara: Ankara University MA thesis. Meir, Irit. 2004. Question and negation in Israeli Sign Language. Sign Language & Linguistics 7(2). 97–​124. Meir, Irit & Wendy Sandler. 2008. Language in space:  The story of Israeli Sign Language. Haifa: University of Haifa Press. Neidle, Carol. 2002. Language across modalities: ASL focus and question constructions. Linguistic Variation Yearbook 2. 71–​98. Neidle, Carol, Judy Kegl, & Benjamin Bahan. 1994. The architecture of functional categories in American Sign Language. Cambridge, MA: Harvard University Linguistics Colloquium. Neidle, Carol, Judy Kegl, Dawn MacLaughlin, Benjamin Bahan, & Robert G. Lee. 2000. The syntax of American Sign Language: Functional categories and hierarchical structure. Cambridge MA: MIT Press. Nunes, Jairo. 2004. Linearization of chains and sideward movement. Cambridge, MA: MIT Press. Nunes, Jairo and Ronice Müller de Quadros. 2004. Phonetic realization of multiple copies in Brazilian Sign Languages. Paper presented at Theoretical Issues in Sign Language Research 8 (TISLR 8), Universitat de Barcelona. Nunes, Jairo & Ronice Müller de Quadros. 2006. Duplication of wh-​elements in Brazilian Sign Language. In Leah Bateman & Cherlon Ussery (eds.), Proceedings of the 35th Annual Meeting of the North East Linguistic Society, 463–​478. Amherst: GLSA. Petronio, Karen. 1993. Clause structure in American Sign Language. Seattle:  University of Washington PhD dissertation. Petronio, Karen & Diane Lillo-​Martin. 1997. Wh-​movement and the position of Spec-​CP: Evidence from American Sign Language. Language 73(1). 18–​57. Pfau, Roland. 2006. Wacky, weird, or widespread? Wh-​questions without wh-​words. Paper presented at 2nd Sign Language Week, Vitoria. Reilly, Judy. 2006. How faces come to serve grammar: The development of nonmanual morphology in American Sign Language. In Brenda Schick, Marc Marschark, & Patricia Elizabeth Spencer (eds.), Advances in the sign language development of deaf children, 262–​290. Oxford:  Oxford University Press. Reilly, Judy Snitzer & Marina McIntire. 1990. Child-​directed language in ASL: Taking expressions at face value. Paper presented at symposium “Affective and linguistic functions of Motherese: Cross-​ language research on early language input” at the International Conference on Infant Studies. Montreal, Quebec. Reilly, Judy Snitzer & Marina McIntire. 1991. W HER E S H O E : The acquisition of wh-​questions in ASL. Papers and Reports in Child Language Development 30. 104–​111. Rizzi, Luigi. 1990. Relativized minimality. Cambridge, MA: MIT Press. Rizzi, Luigi. 1997. The fine structure of the left periphery. In Liliane Haegemann (ed.), Elements of grammar: A handbook of generative syntax, 281–​337. Dordrecht: Kluwer. Richards, Norvin. 2006. Beyond strength and weakness. Cambridge, MA: Massachusetts Institute of Technology paper. Roberts, Craige. 1998. Information structure in discourse:  Towards an integrated formal theory of pragmatics. Columbus, OH: Ohio State Working Papers in Linguistics. Sandler, Wendy & Diane Lillo-​Martin. 2006. Sign language and linguistic universals. Cambridge: Cambridge University Press. Schalber, Katharina. 2006. 
What is the chin doing? An analysis of interrogatives in Austrian Sign Language. Sign Language & Linguistics 9(1). 133–​150. Suñer, Margarita. 1993. About indirect questions and semi-​questions. Linguistics and Philosophy 16.  45–​77. Watanabe, Akira. 2001. Wh-​in-​situ languages. In Mark Baltin & Chris Collins (eds.), The handbook of contemporary syntactic theory, 203–​225. Oxford: Blackwell. Watson, Katharine L. 2010. Wh-​ questions in American Sign Language:  Contributions of nonmanual marking to structure and meaning. West Lafayette, IN:  Purdue University MA thesis.


Wagner, Michael. 2005. Prosody and recursion. Cambridge, MA: MIT PhD dissertation. Weast, Traci. 2008. Questions in American Sign Language: A quantitative analysis of raised and lowered eyebrows. Arlington, TX: University of Texas at Arlington PhD dissertation. Wilbur, Ronnie B. 1995. What the morphology of operators looks like: A formal analysis of ASL brow-raise. In Leslie Gabriele, Debra Hardison, & Robert Westmoreland (eds.), FLSM VI: Formal Linguistics Society of Mid-America (vol. 2), 67–78. Bloomington, IN: Indiana University Linguistics Club. Wilbur, Ronnie B. 1996. Evidence for the function and structure of wh-clefts in American Sign Language. In William H. Edmondson & Ronnie B. Wilbur (eds.), International Review of Sign Linguistics, 209–256. Mahwah, NJ: Lawrence Erlbaum. Wilbur, Ronnie B. 2011. Nonmanuals, semantic operators, domain marking and the solution to two puzzles in ASL. Sign Language & Linguistics 14(1). 148–178. Wilbur, Ronnie B. & Cynthia Patschke. 1999. Syntactic correlates of brow raise in ASL. Sign Language & Linguistics 2(1). 3–41. Wood, Sandra. 2007. Multiple wh-questions in ASL. Talk presented at the 81st Annual Meeting of the Linguistic Society of America, Anaheim, CA. Zeshan, Ulrike. 2003. Indo-Pakistani Sign Language grammar: A typological outline. Sign Language Studies 3. 157–212. Zeshan, Ulrike. 2004. Interrogative constructions in sign languages: Cross-linguistic perspectives. Language 80(1). 7–39. Zeshan, Ulrike. (ed.) 2006. Interrogatives and negative constructions in sign languages. Nijmegen: Ishara Press.


12
NEGATION
Theoretical and experimental perspectives

Kadir Gökgöz

12.1  Introduction

There is not a single human language without a linguistic way of expressing negation. That is to say, negation is a universal component of the Human Language Capacity, no matter whether a language surfaces through the oral or the manual channel. Thus, it is no surprise that negation has been among the first research topics that sign language researchers either directly pursued or used as a diagnostic for their argumentation concerning other aspects of sign language grammar (Liddell 1980; Padden 1983; Veinberg & Wilbur 1990). By now, various valuable contributions have been made by investigating the morphology, syntax, semantics, prosody, acquisition, and neurolinguistics of negation in the field of sign language research. The sheer number of available studies makes it difficult to choose what to highlight within the space allotted for this chapter. Nonetheless, an effort will be made to review the state of the art of research on sign language negation in a concise yet comprehensive way. I will start by addressing theoretical perspectives on the syntax of negation (Section 12.2). Here, I first review the syntactic position of the negative head (Neg°) and, when relevant, the position of its specifier across various sign languages (Section 12.2.1). So far, researchers have generally relied on the position of negation vis-à-vis the verb, modals, aspectual markers, and elements in the Complementizer Phrase domain. I will summarize these discussions and review the data in light of a syntactic universal, the Final-Over-Final Constraint, which is a novel perspective on the issue. In Section 12.2.2, I turn to non-manual markers of negation and the lexical and morpho-syntactic roles that they are argued to play. Generative work on sign language negation has led to recent formal approaches to typological variation, which I will review in Section 12.2.3. Subsequently, in Section 12.3, I address experimental perspectives on sign language negation. Section 12.3.1 provides an overview of how children acquire negation, and Section 12.3.2 covers another developmental topic, namely the development of negation in homesign. Finally, Section 12.3.3 offers a review of neurolinguistic studies of sign language negation. Section 12.4 concludes the chapter.


12.2  Theoretical perspectives

Theoretical studies on sign languages generally rely on (i) the position of negation in the overall clause structure of sign languages, and (ii) the role and distribution of non-manual markers. Both these aspects will be addressed in Sections 12.2.1 and 12.2.2, respectively. In our discussion of the word order facts, we will also discuss how far the attested patterns comply with a syntactic universal, the Final-Over-Final Constraint (FOFC; Biberauer et al. 2014). In Section 12.2.3, two recent formal typological studies will be presented.

12.2.1  Position of negation in the clause structure

The majority of sign languages display one of two basic word orders: SOV or SVO (see Napoli & Sutton-Spence (2014) for a recent review and discussion). In (1), we list the possible positions that negation can take in SOV and SVO orders:

(1)  Possible positions for negation based on SOV (a–c) and SVO (d–f) orders1
     a. S-O-V-Neg        d. S-Neg-V-O
     b. S-Neg-O-V        e. S-V-Neg-O
     c. S-O-Neg-V        f. S-V-O-Neg

In the following two subsections, I will evaluate the findings of previous research with respect to a syntactic universal, namely the Final-Over-Final Constraint, which constitutes a novel contribution of this review.

12.2.1.1  The Final-Over-Final Constraint

In this section, I will introduce a syntactic universal and evaluate the findings on the position of negation in the literature with respect to this universal. According to Biberauer et al. (2014), when two neighboring heads in the same syntactic spine are considered, two of the possible four orders are syntactically harmonic while the other two are disharmonic. The two syntactically harmonic orders are those where the two relevant heads appear on the same side, either both on the right or both on the left. In (2a), both the Verb head and the Negation head are on the right, while in (2b), both the Verb head and the Negation head are on the left. These are the two cross-linguistically allowed harmonic word orders.

(2)  a. Cross-linguistically allowed (harmonic):   S O V Neg
     b. Cross-linguistically allowed (harmonic):   S Neg V O
There are two further logically possible orders between the Verb head and the Negation head. In both these orders, the two heads appear on different sides, and they are thus said to be disharmonic. Although both of these orders are disharmonic, the order in (3a), where the higher head Negation is on the left, is cross-linguistically allowed, whereas the order where the higher negative head is on the right (3b) is cross-linguistically disallowed.

(3)  a. Cross-linguistically allowed (disharmonic):    S Neg O V
     b. Cross-linguistically disallowed (disharmonic): S V O Neg

Accordingly, the Final-​over-​Final Constraint (FOFC) is formulated as “A head-​final phrase αP cannot dominate a head-​initial phrase βP, where α and β are heads in the same extended projection” (Biberauer et al. 2014: 171). In other words, given two neighboring heads, a higher final head can only be on top of a lower final head, thus the Final-​Over-​ Final Constraint. I will now evaluate the findings from several languages with respect to the FOFC.
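For concreteness, the constraint can also be restated over labeled bracketings. The following schema is only a sketch of the quoted definition in bracket notation (not Biberauer et al.'s own formalization); here α stands for the higher head (Neg°), β for the lower head (V°), and γP for the complement of β:

     Ruled out by the FOFC:  *[αP [βP β γP] α]   (a head-final αP dominating a head-initial βP)
     Permitted:               [αP [βP γP β] α],  [αP α [βP β γP]],  [αP α [βP γP β]]

Applied to negation, the banned configuration corresponds to S-V-O-Neg in (3b), whereas S-O-V-Neg, S-Neg-V-O, and S-Neg-O-V in (2a), (2b), and (3a) are all permitted.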

12.2.1.2  SOV sign languages in light of the FOFC

Italian Sign Language (Lingua dei Segni Italiana, LIS) and Turkish Sign Language (Türk İşaret Dili, TİD) are two of the SOV languages in which the basic negative marker follows the verb in sentence-final position. I will neglect non-manual markers for most of the examples in this section and only include them when they are pertinent to the discussion.

(4)  GIANNI MARIA LOVE NOT
     'Gianni doesn't love Maria.'                    (LIS, Cecchetto et al. 2006: 949)

(5)  IX-1  KELİME  BİLMEK  DEĞİL2
     I     word    know    NEG
     'I don't know the word.'                        (TİD, Gökgöz 2011: 69)

In both LIS and TİD, the basic negative marker is argued to occupy a structural position higher than and to the right of the verb (Geraci 2005; Cecchetto et al. 2006, 2009; Gökgöz 2011; Gökgöz & Arık 2011; Gökgöz & Wilbur 2016). However, the fact that negation is immediately to the right of the lexical verb in (4) and (5) does not necessarily mean that no other syntactic positions intervene between Neg° and V°. In fact, in both LIS and TİD, a modal or an aspectual marker is shown to occur between the verb and the manual negative marker. The examples in (6) illustrate this possibility with data from TİD (6a) and LIS (6b).

(6)  a. HAYIR  BU    AKŞAM    ÖDEV      HAZIRLAMAK  LAZIM  DEĞİL
        no     this  evening  homework  prepare     need   NEG
        'No, we don't need to prepare some homework this evening.'   (TİD, Gökgöz 2011: 56)
     b. GIANNI CONTRACT SIGN CANNOT
        'Gianni cannot sign the contract.'                            (LIS, Cecchetto et al. 2015: 215)

Furthermore, the fact that negation is at the right edge of the sentences in (4) to (6) does not necessarily mean that negation occupies the highest position on the right in these SOV languages. Not only elements which are argued to be hosted by the specifier position of the negative phrase, i.e., SpecNegP on the right, but also some other elements related to the Complementizer Phrase (CP) are observed to follow the basic negative marker in various SOV languages. Examples (7a,b) show for Catalan Sign Language (LSC) and TİD, respectively, that a negative adverbial may appear to the right of the basic manual negative marker; this adverbial is argued to occupy SpecNegP on the right.

(7)  a. INDEX1  FUMAR  NO   MAI
        I       smoke  NEG  never
        'I have never smoked.'                       (LSC, Pfau & Quer 2007: 135)
     b. INDEX1  İŞARET  BİLMEK  DEĞİL  HİÇ
        I       sign    know    NEG    at.all
        'I didn't know (how to) sign at all.'        (TİD, Gökgöz 2011: 54)

(8), in turn, shows the position of negation vis-à-vis elements from the CP domain: a yes/no question marker in TİD (8a) and a clause-final index sign that is also assumed to be related to yes/no question formation in TİD (8b). (8c) shows the position of negation with respect to PALM-UP in TİD, and finally (8d) shows the position of negation with respect to a wh-element in LIS. These examples thus indicate that negation can be dominated by still higher elements within the CP domain.

(8)  a. ÖĞRENCİ  ÖDEV      YAPMAK  DEĞİL  SORU-İŞARETİ
        student  homework  do      NEG    Q-MARK
        'Didn't the student do (her) homework?'      (TİD, Gökgöz & Wilbur 2016: 262)
     b. CÜMLE     OKUMAK  BİLMEK  DEĞİL  IX-2
        sentence  read    know    NEG    you
        'Don't you know how to read a sentence?'     (TİD, Gökgöz 2011: 57)
     c. ADAM  EKMEK  YAP   DEĞİL  AVUÇ-YUKARI
        man   bread  make  NEG    palm-up
        'Isn't the man making bread?'                (TİD, my own fieldwork data, Purdue University, 2010)
     d. CAKE EAT NOT WHO
        'Who did not eat the cake?'                  (LIS, Cecchetto et al. 2015: 216)

So far, we have reviewed three sign languages with SOV order, namely TİD, LIS, and LSC, where negation occupies a position to the right of the verb. Likewise, negation occurs to the right of the verb in still another sign language with SOV word order, namely German Sign Language (Deutsche Gebärdensprache, DGS), as shown in (9).

                         hs
(9)  MUTTER  BLUME   KAUF NICHT
     mother  flower  buy  NEG
     'Mother doesn't buy a flower.'                  (DGS, adapted from Pfau & Quer 2002: 79)


However, DGS differs from TİD, LIS, and LSC in that the basic clause negator, in this case treated as a negative adverb, occupies SpecNegP rather than Neg°, as argued by Pfau & Quer (2002). They further argue that the verb moves to Neg° to carry the affixal [Neg] feature, which is realized as a side-to-side headshake. Nonetheless, Pfau & Quer (2002) still argue that Neg° is on the right in DGS as well.

Let us evaluate the findings so far with respect to the FOFC. All four sign languages addressed above show a so-called "harmonic" word order with respect to the relative positions of negation and the verb, i.e., both V° and Neg° are on the right, as represented in (2a) above, following Biberauer et al. (2014).

Austrian Sign Language (Österreichische Gebärdensprache, ÖGS) is yet another SOV language (Schalber 2006). In my own data elicitation with an ÖGS signer, I observed that negation precedes the object in this language, that is, the order is S-Neg-O-V, as in (10):

(10)  MAN NOT BREAD BAKE
      'The man is not baking bread.'                 (ÖGS, my own fieldwork data, Purdue University, 2010)

Although the S-Neg-O-V word order is disharmonic with respect to the relative positions of negation and the verb, Neg° being on the left while V° appears on the right (see (3a) above), it is still a cross-linguistically attested disharmonic word order, in contrast to the other disharmonic order in which Neg° appears on the right and V° on the left (i.e., S-V-O-Neg, as in (3b) above).

12.2.1.3  SVO sign languages in light of the FOFC

In this subsection, we will evaluate findings from sign languages with a basic S-V-O order. Remember that the harmonic word order for such languages is to have a left-headed NegP on top of the left-headed verb phrase, resulting in S-Neg-V-O, as represented in (2b) above. We will see below that this harmonic word order indeed occurs in S-V-O sign languages. However, we will first consider data that are disharmonic, thus violating the FOFC. Remember that, according to the FOFC, the disharmonic word order in (3b) is predicted not to occur, because negation is head-final while the verb phrase is head-initial. However, it seems as if Hong Kong Sign Language (HKSL) allows this word order, see (11). I did not come across any other sign language that allows this word order as the basic word order for negation.

(11)  INDEX-3 HAVE MONEY NOT
      'It is true that he has no money.'             (HKSL, Tang 2006: 218)

Interestingly, Tang (2006) notes that occasionally preverbal negation is used with common verbs such as COME, as shown in (12).

(12)  INDEX-3 NOT COME, BUSY
      'He won't come. He is busy.'                   (HKSL, Tang 2006: 221)


Tang (2006) further notes that preverbal negation is more commonly used by younger signers. She suggests that this is perhaps due to the influence of Cantonese, where negation is preverbal. Alternatively, one could argue that the pattern is not due to Cantonese but results from language change: children using HKSL might be regularizing the disharmonic order of negation with respect to the verb, thus making it harmonic. This hypothesis requires further research. Nonetheless, the existence of a harmonic order alongside the disharmonic, FOFC-violating order is worth noting.

While the disharmonic word order in (11) has been noted, data from other sign languages should be interpreted with due caution. That is to say, not all surface instances of S-V-O-Neg in a sign language are necessarily the result of a disharmonic organization in syntax. For instance, S-V-O-Neg appears to be observed in ASL in sentences like (13) ('br' = brow raise).

                br
(13)  STAY HOME ALL-DAY EVERYDAY  CAN'T
      'I can't stay home all day every day.'         (ASL, adapted from Wilbur & Patschke 1999: 23)

However, despite the surface S-V-O-Neg order, (13) is not disharmonic with respect to the FOFC. Wilbur & Patschke (1999) argue that the S-V-O-Neg order in (13) results from a derivation wherein V° and Neg° do not share the same syntactic spine at the end of the derivation. The derivation in question is a focus construction whereby the element in Neg° is attracted to C° on the right by a strong F(ocus) feature on C°, and the rest of the sentence moves from the complement position of this focus projection (i.e., from IP) to SpecCP due to a strong P(reposing) feature on C°. The derivation is linearized as S-V-O-Neg. Crucially, however, in syntax, the final configuration of the derivation does not violate the FOFC, since the syntactic spine which includes negation does not include the verb in the final syntactic configuration (the representation in (14) is adapted from Wilbur & Patschke (1999)).

(14)  [CP [Spec [IP tCAN'T STAY HOME ALL-DAY EVERYDAY]] [C' tIP [C CAN'T]]]

So far, we have observed S-O-V-Neg and S-Neg-O-V orders based on the S-O-V order in Section 12.2.1.2, in addition to the derived S-V-O-Neg word order based on the basic S-V-O order in this section. Another possible derived word order is S-O-Neg-V. To the best of my knowledge, this word order is not observed in any sign language that has S-O-V as its basic word order. However, it is sometimes observed in a sign language that has underlying S-V-O order but allows for the re-ordering of the object and the verb under certain conditions, as is the case in ASL. First, observe that the basic negation marker NOT occurs between the subject and the verb in this language, both with a plain verb (15a) and a morphologically heavy (agreeing/directional) verb (15b).3 These orders are harmonic, S-Neg-V-O, according to the FOFC, as represented in (2b) above.

(15)  a. JOHN NOT LIKE MARY
         'John does not like Mary.'
      b. JOHN NOT a-HELP-b MARY
         'John didn't help Mary.'                    (ASL, Quadros & Lillo-Martin 2010: 250)

However, it has been observed that the S-O-V order is allowed in ASL when the predicate of the sentence is morphologically special, carrying a classifier morpheme (16a) or a directionality/agreement morpheme (16b) (e.g., Fischer 1975; Liddell 1980; Matsuoka 1997; Braze 2004; Sandler & Lillo-Martin 2006; Quadros & Lillo-Martin 2010; Gökgöz 2013).

(16)  a. SOV example with a classifier morpheme on the verb
         MAN BASEBALL CATCH-CL(round)
         'The man caught the baseball.'
      b. SOV example with directionality/agreement morphemes on the verb
         STUDENT-i OTHER-j STUDENT-j i-LOOK-AT-j
         'The student looked at the other student.'  (ASL, Gökgöz 2013: 77)

In the derived S-O-V order, negation has to occupy the position between the object and the verb (17a,c), while positioning negation between the subject and the object, as in (17b,d), leads to ungrammaticality.

(17)  a. MAN BASEBALL NOT CATCH-CL(round)
         'The man didn't catch the baseball.'
      b. * MAN NOT BASEBALL CATCH-CL(round)
      c. STUDENT-i OTHER-j STUDENT-j NOT i-LOOK-AT-j
         'The student didn't look at the other student.'
      d. * STUDENT-i NOT OTHER-j STUDENT-j i-LOOK-AT-j
                                                     (ASL, Gökgöz 2013: 93, 95)

Note that the S-O-Neg-V order in (17a,c) is still harmonic, since both the verb and the negation appear on the left. It is the position of the object that changes. In the derived S-O-V order, negation can also occupy the position to the right of the verb, i.e., S-O-V-Neg, as shown in (18).

               br                       hs
(18)  a. MAN BASEBALL CATCH-CL-round   NOT
         'It is not the case that the man caught the baseball.'
                        br                            hs
      b. STUDENT-i OTHER-j STUDENT-j i-LOOK-AT-j     NOT
         'It is not the case that the student looked at the other student.'
                                                     (ASL, Gökgöz 2013: 101)


Similar to the previous ASL example in (13), the element in Neg° moves to the C° position on the right in (18), and subsequently, the IP, which has been re-​ordered as SOV due to the nature of the verb (which triggers object movement), moves to SpecCP. Therefore, three syntactic operations are needed to derive the S-​O-​V-​Neg order from the basic S-​ Neg-​V-​O order in ASL. Still, the ultimate configuration does not violate the FOFC since negation and the verb do not share the same syntactic spine.
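For concreteness, the three operations just mentioned can be laid out step by step. The following is only an illustrative sketch of the account reviewed above; the bracket labels and traces are mine and do not reproduce a representation from the cited works:

     (i)   object–verb re-ordering triggered by the heavy verb:   [IP S O NOT V]
     (ii)  NOT moves from Neg° to C° on the right:                [CP [C' [IP S O tNOT V] [C NOT]]]
     (iii) the remnant IP moves to SpecCP:                        [CP [IP S O tNOT V]k [C' tk [C NOT]]]

The output is linearized as S-O-V-NOT, while NOT and the verb end up in different syntactic spines, so the FOFC is respected.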

12.2.1.4  Other distributions of negation in a sentence and the FOFC

Having discussed the various word order possibilities with a single negative element in a sentence, we now turn to a review of two special uses of negation: Negative Concord and contrastive use.4 Negative Concord (NC), where two negative elements co-occur in a clause but do not cancel each other, is realized in two ways in sign languages. In the first type, both negative elements are the same; this pattern is referred to by the special term negative doubling, and it often has the semantic function of emphasis. In the second type of NC, two different negative elements co-occur in the clause; these again do not cancel each other, but they also do not contribute an additional semantic effect such as emphasis. In negative doubling, the same negative element occurs once in some position internal to the clause and once in clause-final position. In (19), we provide an example of negative doubling from Croatian Sign Language (Hrvatski Znakovni Jezik, HZJ).

(19)  IX-3-i BAKER-i IX-3-i NOT BAKE BREAD NOT
      'The baker didn't bake the bread.'             (HZJ, my own fieldwork data, Purdue University, 2010)

Doubling also occurs in other sign languages, such as ASL and Brazilian Sign Language (Língua Brasileira de Sinais, Libras) (Petronio 1993; Quadros 1999; Lillo-Martin & Quadros 2008). As a matter of fact, Zeshan (2006a) reports that negative doubling is observed in many of the 15 sign languages in her sample that have preverbal negation. For the HZJ sentence in (19), it is not known whether doubling has a semantic effect. However, for ASL, it has been argued that doubling is used for emphatic focus (Lillo-Martin & Quadros 2008), as illustrated in example (20).

(20)  IX-1 NO GO PARTY NO
      'I absolutely didn't go to the party.'         (ASL, Quadros & Lillo-Martin 2010: 240)

Note that if the negative elements in (20) were in the positions that they seem to occupy at the surface, as also argued by authors such as Petronio (1993), the order would violate the FOFC, since the C° accommodating the final instance of negation would be on the right while the verb and the preverbal instance of negation would be on the left.

The account proposed by Lillo-Martin & Quadros (2008) resolves this issue. According to them, a sentence-final double is used for emphatic focus, fulfilling the same function as has been proposed for ASL by Petronio (1993) and for Libras by Quadros (1999). However, in contrast to Petronio (1993), Lillo-Martin & Quadros (2008) argue that the functional head that hosts the double is not a functional head on the right but an E(mphatic)-Focus head in the left periphery. The sentence starts out with the usual S-V-O order. During the derivation, the negative element is attracted to this E-Focus°, and then the rest of the sentence remnant-moves to a higher position (see Kimmelman & Pfau, Chapter 26, for a syntactic structure). Subsequently, a further morphological fusion operation applies which makes possible the spell-out of multiple copies (Nunes & Quadros 2008), and thus both copies of negation are spelled out. Similar to what Wilbur & Patschke (1999) suggested (see the structure in (14) above), the copy of negation that surfaces at the right edge of the sentence is ultimately in a different syntactic spine than the verb and the preverbal negation; thus a violation of the FOFC is avoided. In principle, the HZJ example in (19) could be explained along similar lines, and would thus not violate the FOFC. However, at present, it is not clear whether the double is used for emphasis, and it can therefore not be determined whether a mechanism similar to the one suggested for ASL and Libras is operative in HZJ.
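The derivation proposed by Lillo-Martin & Quadros (2008) for doubling cases like (20) can be paraphrased step by step. This is only an illustrative sketch of the account summarized above; the labels (E-FocP, XP) and traces are mine and do not reproduce the authors' own representations (see Kimmelman & Pfau, Chapter 26, for the actual structure):

     (i)   base order:                   [IP IX-1 NO GO PARTY]
     (ii)  NO is attracted to E-Focus°:  [E-FocP [E-Foc° NO] [IP IX-1 NO GO PARTY]]
     (iii) remnant movement of IP:       [XP [IP IX-1 NO GO PARTY] [E-FocP [E-Foc° NO] tIP]]
     (iv)  morphological fusion licenses the spell-out of both copies of NO, yielding the surface string IX-1 NO GO PARTY NO.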

As pointed out above, another type of NC involves two different negative elements co-occurring in the same sentence; this type is illustrated by the following ASL sentence from Wood (1999).

(21)  JOHN NOT BREAK FAN NOTHING
      'John didn't break any part of the fan.'       (ASL, Wood 1999: 76)

For (21), Wood (1999) proposes that NOTHING functions as a determiner that modifies the meaning of the NP, FAN. If so, the verb and NOTHING would not share the same syntactic spine and thus would not violate the FOFC, although NOTHING occupies the sentence-final position on the surface. Independent of the details of their syntax, which we will not pursue here any further, examples like (21) show that NC with two different negative elements in the same sentence is attested in a sign language. We will come back to NC as a useful domain of another kind of formal typological inquiry in Section 12.2.3.

Another special use of negation is for contrast. Contrast in ASL can be encoded by the use of space and body movements (Wilbur & Patschke 1999). According to Wilbur & Patschke (1999), negation focus is signaled by a backward body lean whereas affirmation focus is marked by a lean forward. Also, using the right and left areas of the signing space equally works for contrastive negation, similar to the use of right and left areas for Parallel Focus, as described in Wilbur & Patschke (1999: 296–297).5 One might argue that at a more abstract level, it is the use of opposite areas in space that is exploited for contrastive purposes, which in turn can be employed by several grammatical mechanisms. In the case of contrastive negation, the physical space helps to divide the semantic space into two domains: one where the proposition holds of the constituent anchored to that domain, and another which creates the opposition, that is, a domain where the proposition does not hold of the constituent anchored to that domain. Example (22), from ASL, illustrates the use of negation and space for contrast.

                   hs            hn
               lean right      lean left
(22)  MAN BUY  NOT CAR-i       MOTORCYCLE-j
      'The man didn't buy a car but a motorcycle.'   (ASL, Gökgöz 2013: 97)

(22) suggests that with the proper non-​manual markers, contrastive negation can be expressed in situ in ASL. This option is also noted by Neidle & Lee (2005), who offer the example in (23).  

                      focus
(23)  JOHN LIKE  NOT BAGEL  DONUT
      'John likes not bagels but donuts.'            (ASL, Neidle & Lee 2005)

This in turn implies that the position of negation in (22) and (23) is internal to the constituent that is being targeted by negation, that is, the contrasted DP. If so, the syntactic spines of the negative element and the verb are different, and therefore no FOFC issue arises.

This subsection has shown that although sign languages show different positions of negation, overall they still obey the universal syntactic constraint FOFC, with the (apparent) exception of HKSL (at least for older signers), which should be investigated in more syntactic detail. In the following subsection, we will review morphosyntactic and prosodic discussions based on non-manual markers of negation.

12.2.2  Non-manual markers

So far, we have focused on the position of manual markers of negation in various sign languages. However, descriptions of the expression and linguistic structure of negation in sign languages have also benefited, to a great extent, from the linguistic behavior of non-manual marking. In general, non-manual markers have been argued to play lexical, morphological, syntactic, and prosodic/intonational roles (see Pfau & Quer (2010) for a review; also see Wilbur, Chapter 24), and non-manual markers accompanying negation are no exception to these various functions (see Quer (2012) for a review). Although a side-to-side headshake and, for some sign languages, a backwards head-tilt have been observed in all sign languages described so far, the scope of these non-manual markers in relation to manual signs, as well as their behavior in the absence of a manual negative marker, has made it possible for researchers to offer precise accounts of the linguistic nature of negation which go beyond a useful, albeit mainly descriptive, division into manual dominant and non-manual dominant sign languages (cf. Zeshan 2004, 2006a).6 These analyses range from the relevant non-manual marker being a lexical feature of the manual negative marker, through the non-manual marker being an affixal/morphological feature that needs to be attached to a manual segment, to the non-manual marker encoding the syntactic (c-command or some other) domain of Neg° (Neidle et al. 2000; Pfau & Quer 2002; Cecchetto et al. 2009; Gökgöz 2009, 2011; Hrastinski 2010; Goodwin 2013; among others). We will introduce these options in this section as a way to set the stage for the discussion of formal typologies in the next section.

When a non-manual marker is argued to be part of the lexical description of a sign, this means that it is one of the phonological parameters that is required during the production of a sign (the others being handshape, place of articulation, movement, and orientation; see van der Hulst & van der Kooij, Chapter 1). One such non-manual marker is the backwards head-tilt ('bht') observed in TİD. In this language, the non-manual marker accompanies at least the manual negative marker, and it tends to regressively spread onto the predicate that the negative marker cliticizes to (Zeshan 2004; Gökgöz 2009), similar to local spreading of other lexical non-manuals, not restricted to negation per se, observed in other sign languages (Crasborn et al. 2008, among others); see (24) for an example.

(24)  IX-1  YAP  DEĞİL
      I     do   NEG
      'I didn't do (it).'                            (TİD, Gökgöz 2011: 61; image © John Benjamins, reprinted with permission)

When fulfilling a lexical function, the non-manual marker is local and is only lexically relevant, without showing any morphosyntactic contribution. However, the physically identical non-manual marker is observed to occasionally spread beyond the lexical verb in Greek Sign Language (GSL), as shown in (25). For this reason, observing one and the same non-manual marker in two sign languages does not necessarily mean that it behaves in the same way in those two languages. In other words, non-manuals should be treated case by case in each sign language.

                          bht
(25)  INDEX-1 AGAIN  GO NOT-B
      'I won't go (there) again.'                    (GSL, Antzakas 2006: 265)

The precaution to treat same-looking non-manual markers of negation on their own terms in each language is valid not only for the backwards head-tilt but also for the side-to-side headshake, a very common non-manual marker accompanying negation across sign languages. What looks like the same non-manual marker on the surface has been proposed to fulfill different functions depending on how a given sign language makes headshake part of its grammar. For instance, in ASL, the non-manual marker of negation is observed to display the following distributions.

                   _hs_
(26)  a. JOHN [NegP NOT [VP BUY HOUSE]]
                   __________hs__________
      b. JOHN [NegP NOT [VP BUY HOUSE]]
                       _______hs_______
      c. JOHN [NegP [VP BUY HOUSE]]
         'John is not buying a house.'               (ASL, adapted from Neidle et al. 2000: 44–45)

In contrast to the backwards head-tilt in TİD, which is a lexical component of the basic negative sign, the side-to-side headshake is argued to play a syntactic role in ASL (Neidle et al. 2000; Pfau & Quer 2002). First, side-to-side headshake can occur with only the manual negative marker, marking the locus of Neg° in syntax in ASL (26a). Second, side-to-side headshake is also observed to optionally spread over the syntactic c-command domain of Neg°, namely over VP (26b). This piece of data is the first indication that headshake is a syntactic marker in ASL. This is further supported by the fact that headshake alone can mark a sentence as negative even in the absence of a manual negative marker (26c). In this scenario, though, the headshake needs to spread over the entire VP, as in (26c); it cannot accompany only the verb (27), as inferred from Neidle et al.'s (2000) discussion.

                       _hs_
(27)  * JOHN [NegP [VP BUY HOUSE]]
        'John is not buying a house.'                (ASL, based on Neidle et al. 2000: 45)

The availability of the distribution in (27) as a grammatical option in other sign languages, such as DGS, has led researchers to propose that the role of side-to-side headshake is not uniformly that of marking a syntactic domain. Rather, it can also work as a morphological feature that can be expressed on the verb alone. According to this line of reasoning, the negative headshake associates with the lexical verb after V-to-Neg movement, thanks to the affixal character of the non-manual marker in this language (Pfau & Quer 2002). This is illustrated by the DGS example in (28), where the headshake accompanies only the verb (spreading over the object remains optionally possible).

                          _hs_
(28)  POSS1 BROTHER WINE  LIKE
      'My brother doesn't like wine.'                (DGS, adapted from Pfau 2016: 53)

Lastly, it should be noted that side-​to-​side headshake does not necessarily have a syntactic or morphological function in every sign language in which it occurs as a marker of clausal negation. It is also observed as a lexical component of some manual negative signs, as reported, for instance, in Zeshan (2004, 2006b) and Gökgöz (2009, 2011) for TİD. This overview of non-​manual markers of negation shows that despite surface similarities, the function of one and the same non-​manual marker may differ from one sign language to the other, and therefore, non-​manual markers of negation should be studied in detail for each language under investigation. One way of conducting such a detailed investigation is by way of formal comparisons between sign languages, as we will discuss in the next subsection.


12.2.3  Formal approaches to typological issues

A first attempt at descriptive typological work on the distribution of negation was offered by Zeshan (2004, 2006a). She divides sign languages into two groups with respect to the distribution of manual and non-manual markers: manual dominant sign languages and non-manual dominant sign languages. According to Zeshan (2004, 2006a), a manual dominant sign language obligatorily requires a manual marker to negate a sentence, and the relevant non-manual marker accompanies only the manual marker of negation. In contrast, a non-manual dominant sign language does not obligatorily require a manual negative marker to negate a sentence, since a non-manual negative marker is capable of negating a sentence on its own. Zeshan further observes that in non-manual dominant sign languages, the non-manual marker is not restricted to the negative sign only, but may spread over other signs in a sentence.

12.2.3.1  Goodwin (2013): a formal syntactic typology based on where [+neg] occurs

Building on Zeshan's (2004, 2006a) insights, Goodwin (2013) reviews further typological data and proposes a formal syntactic typology of negation in sign languages. First, she observes that when a manual negative sign is not obligatory in a sign language, that sign language requires the non-manual marker to spread. This spreading marks the syntactic/semantic domain/scope of negation in such languages. In such a situation, the relevant negative feature, {+neg}, is in NegP only, as represented in (29), adapted from Goodwin (2013: 19).

(29)  [NegP{+neg} Spec [Neg' Neg VP]]
      → obligatory spread of the non-manual marker in the absence of a manual negative marker

(29) explains ASL data, such as example (26c) above, where there is no manual negative marker and the non-​manual marker of negation, i.e., side-​to-​side headshake, obligatorily spreads over the entire VP. However, there is another option for non-​manual dominant sign languages. Goodwin (2013) proposes that some sign languages may realize the {+neg} feature in the negative head Neg° as well. In such cases, the manual marker is argued to occupy Neg° and realize the {+neg} feature. Since the {+neg} feature is thus realized, the spread of the non-​manual marker is not obligatory, but it can optionally take place as represented in (30), adapted from Goodwin (2013:  18). (30) explains the optional spreading of headshake observed in the ASL examples (26a) and (26b).

(30)  [NegP{+neg} Spec [Neg' [Neg NOT{+neg}] VP]]
      → optional spread of the non-manual marker in the presence of a manual negative marker
      → the manual marker NOT realizes the {+neg} feature

Goodwin (2013) further observes that if the manual negative marker is obligatory, this manual marker needs to occupy a syntactic position to mark the scope of negation. Goodwin reports that in such a situation, the non-manual marker can optionally spread and redundantly mark the syntactic domain of negation in some sign languages, such as Japanese Sign Language (Morgan 2006) and TİD (Gökgöz 2011).7 The feature distribution in (30) can explain such sign languages as well. The optional spread of the non-manual marker is due to the optional {+neg} feature in NegP. However, as Goodwin (2013) notes, in other manual dominant sign languages, the non-manual marker for negation cannot spread and can thus not redundantly mark the domain of negation, as is true, for instance, in Jordanian Sign Language (Hendriks 2008). In such sign languages, there cannot be a {+neg} feature in NegP. The representation in (31), based on Goodwin's (2013) proposal, accounts for such sign languages (the structure is right-branching, but nothing would change if it were left-branching).

(31)  [NegP Spec [Neg' [Neg NOT{+neg}] VP]]
      → no spreading of the non-manual marker, since there is no {+neg} feature on NegP
      → the manual marker NOT is obligatory

Lastly, Goodwin (2013) addresses DGS, a non-manual dominant sign language. First observe the following data, which show that the side-to-side headshake needs to accompany at least the verb, even when the manual negative sign is present; consequently, (32a) is ungrammatical, while (32b), with headshake on NOT and the verb, is well-formed. Also observe that the non-manual may optionally spread over the object (32b).

                                    _hs_
(32)  a. * POSS1 BROTHER WINE LIKE  NOT
                         (_hs_) ___hs___
      b.   POSS1 BROTHER  WINE  LIKE (NOT)
           'My brother doesn't like wine.'
                           _hs_   (_hs_)
      c.   POSS1 BROTHER DOCTOR   (NOT)
           'My brother is not a doctor.'
                                                     (DGS, Pfau 2016: 53; Pfau 2008: 48)


Goodwin (2013) agrees with Pfau & Quer (2002) in placing NOT in SpecNegP on the right in DGS, and she also follows them in assuming that this sign is lexically specified for a headshake. However, in contrast to Pfau & Quer (2002, 2007), Goodwin (2013) argues that there is no verb movement to Neg° in this language. Rather, she proposes that there is a {+neg} feature in NegP. This feature needs to spread to mark the domain of negation; therefore, it obligatorily spreads onto the verb and the object. Goodwin then proposes that the object can optionally move somewhere to the left above NegP, which explains the optional spread in (32b). She argues against Verb-​to-​Neg movement based on examples such as (32c), which involves the non-​verbal predicate DOCTOR , assuming that there is no obvious reason for the non-​verbal predicate to move to Neg°. Goodwin’s (2013) work successfully shows that a formal syntactic typology is possible based on negation data from several sign languages that are now available. Another successful formal typology comes from Pfau’s (2016) work.

12.2.3.2  Pfau (2016): a formal syntactic typology based on feature values

Following Zeijlstra (2004, 2008), Pfau (2016) offers a formal syntactic typology of negation in terms of feature values associated with negative elements. Based on Zeijlstra's classification, he proposes that DGS is a strict negative concord (NC) language while TİD is a non-strict NC language. In addition to these two types, Pfau (2016) also discusses what a double negation sign language, the third type in Zeijlstra's classification, might look like, but concludes that to date no such sign language has been attested. Pfau adopts Zeijlstra's (2004, 2008) definition of what determines whether a language is a strict or a non-strict NC language based on the co-occurrence of two negative elements (which, by definition, do not cancel each other). For example, in Polish, negative words (n-words, such as English nothing or nobody) obligatorily co-occur with the basic clause negator, and thus Polish is categorized as a strict NC language. However, in Italian, sentential negation only co-occurs with an n-word when the latter follows the former in the sentence, but not when the n-word is in subject position. Thus, Italian is categorized as a non-strict NC language. Pfau (2016) further cites Zeijlstra (2004, 2008) in proposing the following definition of NC:

(33)  NC is an Agree relation between a single feature [iNEG] and one or more features [uNEG].

In this system, n-​words are argued to be semantically non-​negative, and accordingly bear an uninterpretable negative feature, [uNeg]. This part of the argument accounts for the fact that, in NC languages, sentential negation and the n-​word do not cancel each other, as the n-​word is not semantically negative while the Neg° agreeing with the n-​word is semantically negative. Another component of Zeijlstra’s (2004, 2008) proposal is that the element that contributes an interpretable negative feature, [iNeg], to the structure can be a covert element, that is, a Neg-​operator. Zeijlstra (2004, 2008) further argues that in strict NC languages, both the Neg° and the n-​word carry an uninterpretable negative feature, [uNeg], and that the covert negative operator with the interpretable negative feature, [iNeg], multiply agrees with both of these elements. Thus, the negative operator is responsible for the negative meaning in such a situation. In contrast, in a non-​strict NC language, such as Italian, the Neg° can itself carry the interpretable negative feature, [iNeg], and thus license the negative meaning. 280


Pfau (2016) then argues that DGS is a strict NC language. First, he notes that side-to-side headshake is the morphosyntactic expression of negation, but since it is affixal in nature, the verb needs to move to Neg° so that the affixal headshake can attach to the verb. Neg° is then proposed to have an uninterpretable negative feature, [uNeg]. Since this feature needs to be eliminated during the derivation so that it does not cause an interpretability issue when shipped off to the meaning component, it is assumed that, in the absence of a manual negative sign, a covert negative operator with an interpretable negative feature, [iNeg], enters into an Agree relation and deletes the [uNeg] on the [V+Neg°] complex. This is exemplified in (34a).

                            hs
(34)  a. POSS1 BROTHER WINE LIKE
         [TP SUBJECT [NegP [vP [OBJECT tV]] [Neg° V+hs[uNeg]] Op¬[iNeg]]]
                            hs    hs
      b. POSS1 BROTHER WINE LIKE NOT
         [TP SUBJECT [NegP [vP [OBJECT tV]] [Neg° V+hs[uNeg]] NEG[iNeg]]]
                                                     (DGS, adapted from Pfau 2016: 57)

Remember that Pfau (2016) assumes that a Neg-operator occupies SpecNegP, as in (34a), when there is no manual marker of negation in DGS. When the manual negative marker is present, as in (34b), Pfau (2016) argues that it occupies this position and contributes the interpretable negative feature, [iNeg], which then enters into an Agree relation with the uninterpretable negative feature, [uNeg], of the [V+Neg°] complex, as illustrated in (34b). According to Pfau (2016), what makes DGS a strict NC language is the fact that the two negative features are obligatorily present in the structure, that is, [uNeg] in Neg° and [iNeg] in SpecNegP, no matter whether this position is occupied by NOT or a covert Neg-operator. These negative features do not cancel each other; rather, there is an Agree relation between them.

Pfau (2016) then turns to TİD and argues that this sign language is a non-strict NC language, meaning that there is no need to have more than one negative feature/element in the structure, but when there are two of them, the structure is still grammatical. First, observe data from TİD which attest to Pfau's (2016) proposal, based on original data from Zeshan (2006b) and Gökgöz (2011). (35) shows that the basic negative marker NOT appears to the right of the verb. This manual marker is argued to occupy Neg° (Gökgöz 2011), and according to Pfau (2016), this is the locus of an interpretable negative feature, [iNeg], like the negative particle in Italian. Since the feature value of the manual particle is interpretable, this negative particle is sufficient to negate the sentence on its own:

(35)  INDEX1 BANANA FRONT THROW NOT[iNeg]
      'I did not throw the banana to the front.'     (TİD, adapted from Gökgöz 2011: 60)

While the negative particle with an interpretable negative feature, [iNeg], is sufficient to negate the sentence on its own in (35), it can optionally be accompanied by n-words in different positions, as exemplified in (36). Each of these optional n-words introduces an uninterpretable negative feature, [uNeg], which is checked against the interpretable negative feature, [iNeg], associated with the negative particle in Neg°.


(36)  a. INDEX1 SIGN KNOW NOT AT-ALL
         [TP SUBJECT [NegP [vP OBJECT V] [Neg° NEG[iNeg]] N-word[uNeg]]]
         'I didn't know how to sign at all.'
      b. WATER SEA INDEX1 NONE GO NOT
         [XP OBJECT [TP SUBJECT [NegP [vP N-word[uNeg] V] NEG[iNeg]]]]
         'I never went to the seaside at all.'
         (TİD, adapted from Gökgöz 2011: 69; Zeshan 2006b: 150; representation after Pfau 2016)

The fact that two (or more) negative elements do not cancel each other makes TİD an NC language, and the fact that the second negative element is only optionally present is what makes TİD a non-​strict NC language, as Pfau (2016) proposes. This concludes our review of theoretical perspectives on negation in sign languages. In the next section, we will review experimental issues.

12.3  Experimental perspectives

This section is devoted to a review of experimental work on negation in sign languages. I will start in Section 12.3.1 by reviewing a study that investigates how children acquire negation. This will be followed in Section 12.3.2 by a study on the development of negation in a context wherein a homesigner develops a language-like system on his own, without a language model to work with. Lastly, we will review a neurolinguistic study on negation in Section 12.3.3.

12.3.1  Acquisition of negation by Deaf children learning ASL

In two studies, Anderson & Reilly (1997) examine how children develop the linguistic use of negation. In particular, they investigate the developmental relation between communicative and grammatical uses of negative headshakes. The studies reveal that children start to use communicative headshakes very early on, around age 1;0. Nonetheless, the authors show that the communicative use of headshakes does not automatically translate into linguistic use. Before children integrate headshake into their grammar of negation, they first need to acquire the manual marker for negation, that is, ASL NOT. Children also acquire other manual markers of negation, but all these markers are first used without the accompanying non-manual marker. It takes between one and eight months for the Deaf children to combine headshake and manual markers in a linguistic way. Finally, children use the non-manual marker consistently and accurately. Thus, Anderson & Reilly's (1997) study shows that communication and grammar are two separate systems. Here are the details of their study.

Anderson & Reilly (1997) report that the timing/scope of the non-manual marker with respect to manual material is one criterion for differentiating the linguistic use of the non-manual marker from its communicative use, the other two criteria being consistency and overall intensity. Citing Baker-Shenk (1983), the authors note that a grammatical headshake needs to start and end with manual signs over which negation has scope, as in (37).8

      _________hs_________
(37)  INDEX-1 LIKE CHOCOLATE
      'I don't like chocolate.'
      (Linguistic use with a sharp onset, a consistent body, and a sharp offset)
      (ASL, Baker-Shenk 1983, cited in Anderson & Reilly 1997: 415)

This consistent use of the non-manual expression of negation contrasts with the inconsistent use of communicative headshakes, whose onset and offset do not align with the manual signs and which are not characterized by a constant intensity; rather, their intensity may fluctuate during the production of accompanying signs and speech (as it is a communicative gesture used in speech as well; cf. Kendon (2002)); this pattern is illustrated in (38), based on Liddell's (1980) observations. Another distinguishing criterion is whether the headshake can be used on its own. In that case, it is argued to be communicative rather than linguistic, again showing a wavy/fluctuating distribution rather than a clear-cut categorical one:

(38)  (Communicative use of (isolated) headshake with a non-sharp onset, fluctuating body, and a non-sharp offset)
      (Baker-Shenk 1983, cited in Anderson & Reilly 1997: 415)

Although there might be a common semantic source for the communicative and linguistic use of negation, Deaf children need to differentiate these two uses to learn the rule-governed linguistic use of negation. At issue here is the development of three relevant components: the communicative headshake, the linguistic headshake, and the manual negative signs. Based on these components, Anderson & Reilly (1997) entertain the following three hypotheses.

(i)   If all of these components were operative in the same overall cognitive domain, then the transition between these three would be predicted to be continuous.
(ii)  If language is an independent system, the communicative use of negation and the manual marker of negation would develop separately, and the linguistic non-manual marker would be acquired later than the manual marker of negation, in line with other research on the acquisition of other morphological aspects of ASL (Newport & Meier 1986, as cited in Anderson & Reilly 1997).
(iii) The linguistic use of headshake is first learned as a part of lexical signs of negation. Then the child separates the non-manual marker from the individual manual markers and treats the non-manual as a productive morpheme; subsequently, the child will apply it across the board in negative sentences after some transition and overgeneralization errors.

In Study 1, they analyzed cross-sectional data from 51 Deaf children and found 500 negative signs, some with headshake and some without. Four hundred headshakes were found to be produced in isolation, "often in response to a question or request" (Anderson & Reilly 1997: 419). They identified eight manual markers, i.e., NOT, DON'T-WANT, DON'T-LIKE, DON'T-KNOW, NO, NONE, CAN'T, and NOT-YET, and carefully investigated the coordination of manual and non-manual markers.


At first, children used communicative headshakes and manual signs independently. They produced a headshake on its own in response to requests and questions, and they used seven of the eight manual markers of negation initially without an accompanying headshake; only gradually was each manual marker accompanied by a headshake. The time between using a manual negator and using it with a headshake varied between one and eight months. That is, children showed a delay in integrating a negative headshake with a manual negative marker. These findings support the second hypothesis and refute the first and third hypotheses.

In order to confirm these results, the authors conducted a further longitudinal study with 16 of the 51 children from the first study. They found 200 negative signs (with and without headshake) and 100 headshakes in isolation. They then compared two different ages for each child, conducting a binomial test which considered whether children's use of a non-manual marker with a manual negator changed between these two ages. Overall, children were found to improve between these two points in time in their correct productions of the headshake with a manual sign. The authors conclude that the results from the second study also support hypothesis (ii), namely that "children relied on the manual channel for the expression of negation before the non-manual signal emerged and was correctly integrated with the manual components" (Anderson & Reilly 1997: 424). In other words, "the communicative headshake does not generalize directly to the emerging linguistic system, suggesting that communicative and grammatical headshakes are mediated by two separate systems" (p. 425).

In this context, it would be interesting to investigate the development of negation in a manual dominant sign language. Given that in manual dominant sign languages the manual and non-manual marker always co-occur, and the non-manual generally does not spread (see Section 12.2.3), children might immediately analyze the manual and non-manual component as an indecomposable unit. In this way, a study on such a language might be a way of checking the third hypothesis above.9

12.3.2  Negation in a homesign system

A homesign system is a communication system created by a deaf child without access to a conventional linguistic model. Such a situation may occur when the hearing parents of a deaf child do not know a sign language and thus cannot provide accessible primary linguistic input that would allow the child to acquire a sign language as a native language. Franklin et al. (2011) examined the homesign system of a child in the United States, whom they call David. They found that the system created by David involved the expression of negation: 11% of the sentences David produced, a total of 327 sentences, were negative. The authors first describe what kinds of negative meanings David produced. This is followed by an investigation of the forms he used to express negative meanings. They conclude their report by documenting the positions of negation in David's multi-gesture sentences and offer some discussion based on their findings.

As for negative meanings, Franklin et al. (2011) distinguish three types that are present in the early language of children learning English: rejection, denial, and non-existence (Bloom 1970). They report that David expressed all three of these meanings in his productions. (39) and (40) are examples of rejection, (41) is an example of denial, and (42) is an example of non-existence.


(39)  side-to-side headshake – point to bag 1 – point to bag 2
      'No, I don't want bag 1, I want bag 2.'        (David at 3;10; Franklin et al. 2011: 401)

In the context in which (39) was uttered, the experimenter was playing on the floor with David, and she offered him a bag of toys. However, David did not want this bag but another bag. In order to express rejection, David first shakes his head side-to-side, followed by pointing to the bag that he does not want, and finally he points to the bag that he wants. David also rejects the actions of others. In (40), David does not want the experimenter to put on a mask, and he expresses his rejection of this action with a manual gesture accompanied by a side-to-side headshake.

      side-to-side headshake
(40)  PUT-ON-MASK
      'Don't put on mask.'                           (David at 3;08; Franklin et al. 2011: 401)

Franklin et al. (2011) coded a gesture sentence as a denial when "the sentence asserts that an actual or supposed proposition is not the case" (p. 401). In the context of (41), where negation is used with a denial meaning, David and the experimenter are looking at photos together. Looking at his brother's picture, David gestures and tells the experimenter that his brother is at school. The experimenter points to David, and David "responds by pointing to his chest while shaking his head and then points again to the door [meaning school], glossed as I did not go to school" (Franklin et al. 2011: 402).

      side-to-side headshake
(41)  point to self – point to door
      'I did not go to school.'                      (David at 3;03; Franklin et al. 2011: 402)

Note that (41) is an example of denial of an action. Franklin et al. (2011) report that David denied states and similarity between two objects as well. According to Franklin et al. (2011: 402), non-existence/non-occurrence expressions are "about the absence of an object or action whose presence is expected in that context" and "typically include an element of surprise indicating a violation of expectation". (42) is an example of non-existence. David is looking at a picture of two ice-cream cones. One cone has ice cream in it while the other one does not. In (42), David expresses the non-existence of ice cream in the second cone and his surprise about this situation.

      side-to-side headshake
(42)  point to cone without scoop of ice cream – ICE-CREAM – flip
      'There is no ice cream!'                       (David at 3;10; Franklin et al. 2011: 402)

Taken together, examples (39) to (42) show that David used negation in all three functions, that is, for rejection, denial, and non-existence/non-occurrence. How about the development of these functions over time? Franklin et al. (2011) divided the sessions into two groups to check whether there was a developmental difference between the three types of negation. Parallel to what has been found in hearing children learning English (Bloom 1970) and Japanese (McNeill & McNeill 1968, as cited in Franklin et al. 2011), they found that David developed the non-existence/non-occurrence function earlier than

the denial function. Rejection was the major category for the function of negation; it occurred in 48% of David's early negative productions and 66% of his later productions. Non-existence/non-occurrence was observed in 48% of his early productions, but only in 10% of his later productions. The pattern was the reverse for denial: while only 4% of his early negative productions displayed the denial function, 24% of his later productions did. These results show that the denial function in David's production developed later than the rejection and non-existence/non-occurrence functions. The authors thus conclude that David, who created his own homesign system, followed the same developmental trajectory as children learning language with input from a conventional linguistic system. This indicates that the developmental trajectory, rejection–non-existence–denial, is independent of language, but is tied to the social environment and the children's cognitive development, in line with Pea's (1980) ideas.

The authors then proceed to discuss the forms of negation that David developed. Although David did not receive input from an established sign language, he interacted with his parents and observed the gestures that they produced. Obviously, a side-to-side headshake is a gesture that is commonly used by his parents to express negative meaning. The authors found that David used this gesture in 276 out of 327, that is, in 84% of all his negative sentences. David also used some manual gestures exclusively to express negation, such as a palm swipe away from the body, a manual flip (aka PALM-UP or WELL), or a shrug of the shoulders. But these manual forms were not found to be as frequent as the side-to-side headshake, the main negator for David.

In order to determine the sentential position of the headshake, Franklin et al. (2011) looked at its position in multi-gesture sentences, excluding cases in which the headshake occurred on its own or with a flip only, and cases in which the headshake spread over all of the accompanying gestures. David produced 83 headshakes in multi-gesture sentences. Seventy-nine percent of these headshakes appeared at the beginning of the sentence; see (41) above for an example. In the remaining 21%, i.e., significantly less frequently, the headshake occurred at the end of the sentence; see example (42). Note that although non-manual negation occurs at the beginning or end of the sentence, and sometimes co-temporally with an entire sentence, Franklin et al. (2011) did not observe negation in the middle of the sentence negating only a constituent in David's productions. That is, the main function of negation in David's productions seems to be propositional/sentential.

The authors discuss in some detail the implications of the sentence-initial position of negation in David's productions. They argue that although sentence-initial negation usually has an anaphoric function in languages (answering a question or objecting to previous discourse, but not negating a sentence on its own), they did not observe an anaphoric effect in David's productions despite the fact that his negative gesture was primarily sentence-initial. They then regard the position of negation as taking scope over a proposition and note that the placement of negation in the periphery is also consistent with the logic of negation. Furthermore, sentence-initial placement of negation is observed in children's speech, too, as shown in (43).
(43) No good
     No Leila have a turn
     No sunny outside
     No to the bathroom?
     No over
     (Drozd 1995: 1, cited in Franklin et al. 2011: 411)


Franklin et al. (2011) point out that the sentence-initial "No" in the child data in (43) is not anaphoric, but rather is used as bona fide sentential negation. They conclude that David's sentence-initial side-to-side headshake is of the same nature, negating the sentence/proposition rather than functioning as anaphoric negation. Although David does not have a conventional system, it may be worthwhile to compare his productions to patterns attested in natural sign languages. One could argue for a pattern that resembles those sign languages which realize the non-manual expression of negation in a more restricted/local area in the sentence. Such sign languages may or may not have a manual negative marker (as, for instance, TİD vs. DGS; see Section 12.2.2), but the fact that non-manual negation tends to be local in both parallels David's use of negation. Taken together, this study shows that negation develops in a homesign system as a structure-building mechanism even in the absence of a conventional language model. This in turn attests to the universality of negation, although the meanings associated with negation may derive from more general cognitive and social factors. While this was not the primary concern of the authors, the reviewer drew my attention to the fact that one could also investigate the implications of the scope of headshake over David's homesign strings. In this regard, an interesting finding is that David appears not to use negation in a contrastive sense, targeting only a constituent in a sentence rather than the entire proposition. This contrasts with what we observe in, for instance, ASL (see example (23) above, which involves contrastive negation). Lastly, Franklin et al. (2011) note that when the results from their study are compared to those from Anderson & Reilly's (1997) study, one observes that both David and the Deaf children in the latter study initially use headshake for the expression of negation. Further, Anderson & Reilly observed a gap in the use of the side-to-side headshake once the manual markers start to emerge in children learning ASL as their native language. However, as Franklin et al. (2011) note, such a gap is not observed for David, who did not replace non-manual marking of negation with manual markers.

12.3.3  Neurolinguistic evidence
Atkinson et al. (2004) discuss the neurophysiology of negation in British Sign Language (BSL). They first note that BSL is a non-manual dominant sign language: the facial action of negation ("one or more short lateral headshakes, a furrowed brow, narrowed eyes, and downturned mouth, alone or in combination" (p. 251)) obligatorily occurs in negative sentences, while the manual negative marker is optional. Given that syntactic processing is known to take place in the left hemisphere (LH), while prosodic and affective processing are more heavily lateralized in the right hemisphere (RH), the authors argue that if the facial action associated with negation is processed syntactically in BSL, it should remain intact in cases of unilateral RH lesions (cf. Emmorey 2002). First, Atkinson et al. (2004) review the neurolinguistic background to their study, which is useful to summarize here. They start by noting that previous studies on brain-lesioned signers have usually concentrated on the production of facial expressions rather than their comprehension; hence the need for comprehension studies such as their own. They then note that the results from those production studies show that the LH is active in processing linguistic facial actions.

For instance, Corina et al. (1999) found specific difficulties in the production of linguistic facial actions in LH-lesioned patients, while Kegl & Poizner (1996) observed that LH-lesioned patients retained the production of affective facial expressions. Conversely, Corina et al. (1999) and Loew et al. (1997) found the opposite effect in signers with RH lesions. Beyond this dichotomy between RH and LH functioning, Kegl & Poizner (1996) identified finer dissociations when studying one patient with aphasia, which, as is well known, results from damage to parts of the LH. Specifically, they found that this patient had impaired syntax: the patient displayed specific difficulties in producing facial actions that mark syntactic structures, such as conditional clauses and questions. In contrast to these grammatical aspects, the patient was still able to use the facial action of negation. As Atkinson et al. (2004) point out, this finding shows the need to scrutinize theories which argue that the facial action associated with negation is syntactic on a par with, for instance, the non-manual marking of conditionals and questions.

Table 12.1  Predictions for the comprehension of non-manual negation based on hemispheric side of lesion

If non-manual negation marks syntax:
Left-hemisphere lesioned signers: impaired processing of negation is expected under both the non-manual only and the non-manual+manual conditions.
Right-hemisphere lesioned signers: no impairments in processing negation are expected under either the non-manual only or the non-manual+manual condition.

If non-manual negation marks prosody:
Left-hemisphere lesioned signers: comprehension of both non-manual only and non-manual+manual negation should be intact.
Right-hemisphere lesioned signers: comprehension of non-manual negation should be impaired, while comprehension of manual negation should be intact.

The studies cited by Atkinson et al. (2004) provide a basis for determining whether the facial action of negation behaves syntactically or prosodically. The predictions are summarized in Table 12.1, based on Atkinson et al. (2004). In addition to the predictions listed in Table 12.1, Atkinson et al. (2004) also note another possibility, which is related to the idea that the facial expression of negation is tied to a gestural source. The authors report that speakers, when they dislike something, may use a gesture that is similar to the negative facial action used by signers. If signers with brain damage made use of this gesture, different predictions would arise. This raises the possibility that brain-damaged signers may also use iconic/gestural aspects of individual signs in their production and comprehension; this third possibility can therefore be investigated by means of lexical processing involving iconicity. The authors include this aspect in the linguistic background tests of their study, which I will address below. Atkinson et al.'s (2004) experimental test on negation involved pairs of pictures, one of which was the negative of the other (e.g., a dog with a bone and a dog without a bone). The experimenter would sign either a positive or a negative sentence, and the participant was asked to match the sentence with one of the two pictures. Half of the negative sentences included only the non-manual marker of negation, while the other half was

signed with a manual negative marker accompanied by the non-manual marker. The control group, which consisted of 15 elderly signers, had an average correct score of 98.9% in choosing the correct picture based on the signed stimuli. Importantly, RH-lesioned patients scored worse than LH-lesioned patients in response to negative sentences that only included the non-manual marker of negation. On the other hand, their scores were similar to those of the LH-lesioned patients under the non-manual+manual condition. Both groups performed reasonably well under this condition. The finding that RH-lesioned patients had significantly lower scores under the non-manual only condition supports the proposal that the non-manual marker of negation is prosodic rather than syntactic in nature in BSL. As Atkinson et al. (2004) note, their findings support Nespor and Sandler's claim that non-manual markers, at least in some sign language structures, have a prosodic function (Nespor & Sandler 1999; Sandler 1999; also see Wilbur, Chapter 24). They add that

facial expressions may be associated with syntactic structure but are not of themselves syntactic […]. In such an analysis, the underlying syntactic marker of negation could be described as triggering a prosodic contour (non-manual negation) and either a surface manual negation marker or a ∅ surface form. In the latter case, only the non-manual negation would appear in the surface form of the sentence.
(Atkinson et al. 2004: 225)

This study thus contributes new experimental insights to the discussion of the function of non-manual markers of negation, at least for BSL. It remains to be tested, by means of comprehension studies, in which hemisphere the relevant non-manual markers of negation are lateralized in other sign languages that follow a non-manual dominant pattern in the expression of negation. In addition to their conclusion that negative non-manual marking is prosodic in BSL, Atkinson et al. (2004) also raise the possibility of a gestural function of negation. First, note that, at a lexical level, Atkinson et al. (2004) did not find a difference between iconic signs and non-iconic signs in the linguistic background tests that they applied prior to the tasks targeting negation. Thus, they concluded that iconicity/gesture does not play a role in differentiating effects of the particular side of lesion. This being noted, Atkinson et al. (2004: 226) still bring up the possibility that, at a pragmatic level, gesture interpretation may have been implicated: after all, it is possible to gesture that one does not want something, or that something is not the case, with no knowledge of a specific language. They continue: "If face negation has the status of gesture, its preservation in signers with language disorder following LH damage may follow". Recently, Benitez-Quiroz et al. (2016) showed in an innovative way that specific Action Units which are independently used in the facial expression of three emotions (anger, disgust, and contempt) are grammaticalized for the linguistic expression of negation in human language – "human language" because, as they show, the marker is used in both speech and sign. This recent finding can shed some light on the possibility, suggested by Atkinson et al.
(2004), that the LH-lesioned signers may be using certain aspects of the facial expression of negation that are commonly used in emotions; these signers may thus pragmatically make sense of the negative meaning, although the full combinatorial/linguistic

effect that is achieved by combining specific Action Units might not be available to them. On the other hand, perhaps RH-lesioned patients are not able to perceive the gestural pieces of the relevant emotions to start with and thus cannot put these together to get the combinatorial linguistic meaning of negation. Indeed, Atkinson et al. (2004) report in the results of the background part of their study that RH-lesioned signers were impaired in two (i.e., anger and disgust) of the three emotions (anger, disgust, contempt) whose Action Units, according to Benitez-Quiroz et al. (2016), contribute to the combinatorial use of the linguistic expression of negation.

12.4  Conclusion
In this chapter, I reviewed studies that offer theoretical and experimental perspectives on negation in sign languages. I first showed that, overall, the syntactic position of the negative markers in sign languages follows the Final-Over-Final Constraint. The review of non-manual markers of negation emphasized the methodological requirement to investigate non-manual markers in a case-by-case manner in each sign language. Once this is accomplished, formal studies can help to make sense of the variation by applying precise formal tools, for instance, by assigning the relevant negative feature(s) to positions or elements in the phrase structure and by studying their interaction. Such endeavors add to our understanding of the expression of negation and have produced fruitful comparative research, as I outlined in the section on formal typologies. In the experimental part of this chapter, I discussed the findings of three studies. The first section reviewed how children acquire negation. We observed that children differentiate communicative and linguistic uses of the negative headshake and do not automatically translate the former into the latter; rather, they go through a delay of one to eight months before accurately and consistently using the linguistic headshake with each new manual negative marker that they acquire. Next, I addressed another developmental topic, namely the development of negation in a homesign system. This part of the review once more attested to the universality of negation, even in an impoverished context where a homesigner develops a language-like system in the absence of a conventional language model to work with. We observed that negation is used in this homesign system as a structure-building operation, although the meanings/functions of negation seem to be based on broader social and cognitive development. Lastly, we reviewed a neurolinguistic study of negation in BSL which suggests that non-manual marking of negation operates as a prosodic rather than a syntactic marker in this language. A recent computational study, which detected that specific Action Units are combined to grammaticalize the negative face, adds to our understanding of the results from the neurolinguistic study: Action Units that are shared between gesture and grammar may be playing an intriguing role, allowing the LH-lesioned patients to make pragmatic sense of the negative face. This in turn may explain why the RH-lesioned patients did not perform as well under the non-manual only condition for matching a negative sentence to a picture.

Acknowledgments
I would like to thank the editor Roland Pfau for his patience, help, and plenty of useful advice and comments, and the reviewer, Marloes Oomen, for her accurate comments and

helpful suggestions. Their input has much improved this chapter. I am also grateful to Ronnie Wilbur for her endless support and encouragement and for inviting me to present the part on FOFC at the Workshop on Sign Language and Emergence of Language at Purdue University, West Lafayette, Indiana. I am also very grateful to Diane Lillo-Martin for her support and encouragement. Thanks to my MA student, Hande Sevgi, who read the draft of this work and provided valuable feedback. I am also grateful to Okan Kubus, Marina Milkovic, and Christian Stalzer for kindly providing TİD, HZJ, and ÖGS data. Last but not least, I want to thank my wife, Vincenza Iadevaia, for always being there for me. The shortcomings in the chapter are mine.

Notes
1 Neg-S-O-V and Neg-S-V-O are the other two word orders that are logically possible. To the best of my knowledge, these orders have not been reported (with a basic negative marker), except for the homesign study of Franklin et al. (2011), which I will address in Section 12.3.2. Therefore, I will not include these word orders in the syntactic discussions. Nonetheless, I should note that both of these word orders would be allowed by the FOFC (Biberauer et al. 2014). We will discuss the details of this constraint in relation to the other word orders.
2 In some of the sources, sign language examples are glossed in the surrounding spoken language and include an additional line with interlinear glossing in English. We present these examples as they appear in the original sources.
3 Similar to what we observed in S-O-V languages, other functional elements appear in a certain hierarchy with respect to Neg° in S-V-O languages. For instance, the TP is proposed to be higher than the NegP in ASL, while an aspectual functional projection (AspP) is proposed to be lower than negation in this language. Evidence for this proposal comes from the following pieces of data (adapted from Neidle et al. 2000: 80, 84), where WILL in (i) is assumed to occupy T° while FINISH in (ii) is proposed to occupy Asp°. In either case, the FOFC is observed, since the relevant syntactic heads are all in the same direction, i.e., on the left.
(i) JOHN WILL NOT BUY HOUSE
    'John will not buy a house.'
(ii) JOHN NOT FINISH READ BOOK

‘John hasn’t read the book.’ 4 Another context where two negative elements are observed to co-​occur within a clause is what has been called double negation (DN). In contrast to NC, in DN languages, two negative elements that co-​occur within a clause do cancel each other out, and thus the sentence receives an affirmative meaning (e.g., “I didn’t see nothing” means “I saw something” in certain dialects of English). To the best of my knowledge, to date, no sign language has been reported to display DN, as also noted by Pfau (2016). 5 As the reviewer points out, van der Kooij et al. (2006) present similar results for NGT, also taking pragmatics into consideration. 6 According to Zeshan (2004, 2006a), a manual dominant sign language obligatorily requires a manual marker to negate a sentence, and the relevant non-​manual marker accompanies only the manual negation, whereas in a non-​manual dominant sign language, the use of a manual negative marker is not obligatory to negate a sentence, since a non-​manual negative marker is capable of negating a sentence on its own. Zeshan (2004, 2006a) further observes that in non-​manual dominant sign languages, the non-​manual marker is not restricted to the negative sign only but commonly spreads over other signs in a sentence. 7 Note that the relevant spreading non-​manual marker in TİD is not the backwards head-​tilt but a non-​neutral brow position, namely brow raising or lowering. 8 The reviewer draws my attention to the fact that Coerts (1992) found that the alignment of onset and offset of the negative headshake with manual signs is not as sharp in NGT –​in contrast to what Baker-​Shenk notes for ASL. 9 I am indebted to the reviewer for drawing my attention to this possibility for future research.


References Anderson, Diane E. & Judy S. Reilly. 1997. The puzzle of negation: How children move from communicative to grammatical negation in ASL. Applied Psycholinguistics 18(4). 411–​429. Antzakas, Klimis. 2006. The use of negative head movements in Greek Sign Language. In Ulrike Zeshan (ed.), Interrogative and negative constructions in sign languages, 258–​269. Nijmegen: Ishara Press. Atkinson, Jo, Ruth Campbell, Jane Marshall, Alice Thacker, & Bencie Woll. 2004. Understanding ‘not’:  neuropsychological dissociations between hand and head markers of negation in BSL. Neuropsychologia 42(2). 214–​229. Baker-​Shenk, Charlotte. 1983. A microanalysis of the non-​manual components of questions in American Sign Language. Berkeley, CA: University of California PhD dissertation. Benitez-​Quiroz, C. Fabian, Ronnie B. Wilbur, & Aleix M. Martinez. 2016. The not face: A grammaticalization of facial expressions of emotion. Cognition 150. 77–​84. Biberauer, Theresa, Anders Holmberg, & Ian Roberts. 2014. A syntactic universal and its consequences. Linguistic Inquiry 45(2). 169–​225. Bloom, Lois. 1970. Language development:  Form and function in emerging grammars. Research monograph, no. 59. Cambridge, MA: MIT Press. Braze, David. 2004. Aspectual inflection, verb raising and object fronting in American Sign Language. Lingua 114(1). 29–​58. Cecchetto, Carlo, Alessandra Checchetto, Carlo Geraci, Mirko Santoro, & Sandro Zucchi. 2015. The syntax of predicate ellipsis in Italian Sign Language (LIS). Lingua 166. 214–​235. Cecchetto, Carlo, Carlo Geraci, & Sandro Zucchi. 2006. Strategies of relativization in Italian Sign Language. Natural Language & Linguistic Theory 24(4). 945–​975. Cecchetto, Carlo, Carlo Geraci, & Sandro Zucchi. 2009. Another way to mark syntactic dependencies: The case for right-​peripheral specifiers in sign languages. Language 85(2). 278–​320. Coerts, Jane. 1992. Nonmanual grammatical markers: An analysis of interrogatives, negations and topicalisations in Sign Language of the Netherlands. Amsterdam:  University of Amsterdam PhD dissertation. Corina, David P., Ursula Bellugi, & Judy S. Reilly. 1999. Neuropsychological studies of linguistic and affective facial expressions in deaf signers. Language and Speech 42. 307–​331. Crasborn, Onno, Els van der Kooij, Dafydd Waters, Bencie Woll, & Johanna Mesch. 2008. Frequency distribution and spreading behavior of different types of mouth actions in three sign languages. Sign Language & Linguistics 11(1). 45–​67. Emmorey, Karen. 2002. Language, cognition, and the brain. Insights from sign language research. Mahwah, NJ: Lawrence Erlbaum. Fischer, Susan A. 1975. Influences on word order change in American Sign Language. In Charles N. Li (ed.), Word order and word order change, 1–​25. Austin, TX: University of Texas Press. Franklin, Amy, Anastasia Giannakidou, & Susan Goldin-​Meadow. 2011. Negation, questions, and structure building in a homesign system. Cognition 118(3). 398–​416. Geraci, Carlo. 2005. Negation in LIS (Italian Sign Language). In Leah Bateman & Cherlon Ussery (eds.), Proceedings of the North East Linguistic Society (NELS 35), 217–​229. Amherst, MA: GLSA. Goodwin, Corina V. 2013. Negation across sign languages. Manuscript, University of Connecticut, Storrs. Gökgöz, Kadir. 2009. Topics in Turkish Sign Language (Türk İsaret Dili-​TİD) syntax:  Verb movement, negation and clausal architecture. Istanbul: Boğaziçi University MA thesis. Gökgöz, Kadir. 2011. Negation in Turkish Sign Language: The syntax of nonmanual markers. 
Sign Language & Linguistics 14(1). 49–​75. Gökgöz, Kadir. 2013. The nature of object marking in American Sign Language. West Lafayette, IN: Purdue University PhD dissertation. Gökgöz, Kadir & Engin Arık. 2011. Distributional and syntactic characteristics of nonmanual markers in Turkish Sign Language (Türk İsaret Dili, TİD). MIT Working Papers in Linguistics 62.  63–​78. Gökgöz, Kadir & Ronnie B. Wilbur. 2016. Olumsuz evet-​hayır sorularında olumlu önyargı: Türk İşaret Dili’nde olumsuzluk başından tümleyici başa taşımanın delili. In Engin Arık (ed.), Ellerle konuşmak, 253–​273. Istanbul: Koç Üniversitesi Yayınları.


Negation Hendriks, Bernadet. 2008. Jordanian Sign Language: Aspects of grammar from a cross-​linguistic perspective. Amsterdam: University of Amsterdam PhD dissertation. Hrastinski, Iva. 2010. Negative structures in Croatian Sign Language (HZJ). West Lafayette, IN: Purdue University MA thesis. Kegl, Judy A. & Howard Poizner. 1996. Crosslinguistic/​crossmodal syntactic consequence of left hemisphere damage: Evidence from an aphasic signer and his identical twin. Aphasiology 11.  1–​37. Kendon, Adam. 2002. Some uses of the headshake. Gesture 2(2). 147–​182. Liddell, Scott. K. 1980. American Sign Language syntax. The Hague: Mouton de Gruyter. Lillo-​Martin, Diane & Ronice Müller de Quadros. 2008. Focus constructions in American Sign Language and Língua de Sinais Brasileira. In Josep Quer (ed.), Signs of the time: Selected papers from TISLR 2004, 161–​176. Hamburg: Signum. Loew, Ruth, Judy A. Kegl, & Howard Poizner. 1997. Fractionation of the components of roleplay in a right-​lesioned signer. Aphasiology 11. 263–​281. Matsuoka, Kazumi. 1997. Verb raising in American Sign Language. Lingua 103(2). 127–​149. Morgan, Michael. 2006. Interrogatives and negatives in Japanese Sign Language (JSL). In Ulrike Zeshan (ed.), Interrogatives and negatives in signed languages, 91–​127. Nijmegen: Ishara Press. Napoli, Donna Jo & Rachel Sutton-​ Spence. 2014. Order of the major constituents in sign languages: Implications for all language. Frontiers in Psychology 5. 376. Neidle, Carol, Judy Kegl, Dawn MacLaughlin, Benjamin Bahan, & Robert G. Lee. 2000. The syntax of American Sign Language. Functional categories and hierarchical structure. Cambridge, MA: MIT Press. Neidle Carol & Robert G. Lee. 2005. The syntactic organization of American Sign Language: A synopsis. American Sign Language Linguistic Project, Report No. 12. Boston University. Nespor, Marina & Wendy Sandler. 1999. Prosody in Israeli Sign Language. Language and Speech 42. 143–​176. Newport, Elissa & Richard Meier. 1986. The acquisition of American Sign Language. In Dan I. Slobin (ed.), The cross-​linguistic study of language acquisition: Volume 1. The data, 881–​938. Hillsdale, NJ: Lawrence Erlbaum. Nunes, Jairo & Ronice Müller de Quadros. 2008. Phonetically realized traces in American Sign Language and Brazilian Sign Language. In Josep Quer (ed.), Signs of the time: Selected papers from TISLR 2004, 177–​190. Hamburg: Signum. Padden, Carol. 1983. Interaction of morphology and syntax in American Sign Language. University of California at San Diego PhD dissertation [Published 1988 by Garland Outstanding Dissertations in Linguistics, New York] Pea, Roy D. 1980. The development of negation in early child language. In David R. Olson (ed.), The social foundations of language and thought: Essays in honor of Jerome S. Bruner, 156–​186. New York: W.W. Norton. Petronio, Karen M. 1993. Clause structure in American Sign Language. University of Washington PhD dissertation. Pfau, Roland. 2008. The grammar of headshake:  A typological perspective on German Sign Language negation. Linguistics in Amsterdam 1(1). 37–​74. Pfau, Roland. 2016. A featural approach to sign language negation. In Pierre Larrivée & Chungmin Lee (eds.), Negation and polarity: Experimental perspectives, 45–​74. Dordrecht: Springer. Pfau, Roland & Josep Quer. 2002. V-​to-​Neg raising and negative concord in three sign languages. Rivista di Grammatica Generativa 27. 73–​86. Pfau, Roland & Josep Quer. 2007. On the syntax of negation and modals in Catalan Sign Language and German Sign Language. 
In Pamela M. Perniss, Roland Pfau, & Markus Steinbach (eds.), Visible variation: Cross-​linguistic studies on sign language structure, 129–​161. Berlin: Mouton de Gruyter. Pfau, Roland & Josep Quer. 2010. Nonmanuals: Their grammatical and prosodic roles. In Diane Brentari (ed.), Sign languages (Cambridge Language Surveys), 381–​402. Cambridge: Cambridge University Press. Quadros, Ronice Müller de. 1999. Phrase structure of Brazilian Sign Language. Porto Alegre: Pontificia Universidade Catolica do Rio Grande do Sul PhD dissertation.


Kadir Gökgöz Quadros, Ronice Müller de & Diane Lillo-​Martin. 2010. Clause structure. In Diane Brentari (ed.), Sign languages (Cambridge Language Surveys), 225–​251. Cambridge:  Cambridge University Press. Quer, Josep. 2012. Negation. In Roland Pfau, Markus Steinbach, & Bencie Woll (eds.), Sign language. An international handbook, 316–​339. Berlin: De Gruyter Mouton. Sandler, Wendy. 1999. Prosody in two natural language modalities. Language and Speech 42. 127–​142. Sandler, Wendy & Diane Lillo-​Martin. 2006. Sign language and linguistic universals. Cambridge: Cambridge University Press. Schalber, Katharina. 2006. What is the chin doing? An analysis of interrogatives in Austrian Sign Language. Sign Language & Linguistics 9(12). 133–​150. Tang, Gladys. 2006. Questions and negation in Hong Kong Sign Language. In Ulrike Zeshan (ed.), Interrogative and negative constructions in sign languages, 198–​224. Nijmegen: Ishara Press. van der Kooij, Els, Onno Crasborn & Wim Emmerik. 2006. Explaining prosodic body leans in Sign Language of the Netherlands: Pragmatics required. Journal of Pragmatics 38. 1598–​1614. Veinberg, Silvana C. & Ronnie B. Wilbur. 1990. A linguistic analysis of the negative headshake in American Sign Language. Sign Language Studies 68(1). 217–​244. Wilbur, Ronnie B. & Cynthia Patschke. 1999. Syntactic correlates of brow raise in ASL. Sign Language & Linguistics 2(1). 3–​41. Wood, Sandra K. 1999. Semantic and syntactic aspects of negation in ASL. West Lafayette, IN: Purdue University MA thesis. Zeijlstra, Hedde. 2004. Sentential negation and Negative Concord. Amsterdam:  University of Amsterdam PhD dissertation. Utrecht: LOT. Zeijlstra, Hedde. 2008. Negative concord is syntactic agreement. Manuscript, University of Amsterdam, http://​ling.auf.net/​lingBuzz/​000645. Zeshan, Ulrike. 2004. Hand, head, and face: Negative constructions in sign languages. Linguistic Typology 8. 1–​58. Zeshan, Ulrike. 2006a. Negative and interrogative constructions in sign languages: A case study in sign language typology. In Ulrike Zeshan (ed.), Interrogative and negative constructions in sign languages, 28–​68. Nijmegen: Ishara Press. Zeshan, Ulrike. 2006b. Negative and interrogative structures in Turkish Sign Language (TİD). In Ulrike Zeshan (ed.), Interrogative and negative constructions in sign languages, 128–​164. Nijmegen: Ishara Press.


13
NULL ARGUMENTS AND ELLIPSIS
Theoretical perspectives
Carlo Cecchetto

13.1  Introduction
In natural languages, including sign languages, some categories do not need to be overtly expressed, although they are active both syntactically and semantically. For example, while some languages, including English, always require an overt subject in a finite clause, even when it does not provide a semantic contribution (1), other languages allow a null subject much more freely, as illustrated for Italian in (2) and (3). Chinese is even more extreme because both the subject and the object can remain unpronounced (cf. the answer in (4)). In this chapter, the symbol 'Ø' is used to indicate a category which is syntactically present but is phonologically null.

(1) * Ø rains.

(2) Ø Piove.
       rains
    'It is raining.'  (Italian)

(3) Ø Sta arrivando.
       is coming
    '(S)he is coming.'  (Italian)

(4) Question: Zhangsan kanjian Lisi le  ma?
              Zhangsan see     Lisi LE1 Q
              'Did Zhangsan see Lisi?'
    Answer:   Ø  kanjian Ø  le.
              he see     he LE
              'He saw him.'

(Chinese, Huang 1984: 433)


Another example of missing categories involves constituents bigger than a single argument. For example, the VP ‘read War and Peace’ is unpronounced in the second clause in (5) because its content can be recovered under (near) identity with the VP in the first clause. (5)

John read War and Peace and Mary did Ø too.

In earlier work in the generative tradition, null arguments and VP ellipsis were considered separate phenomena. However, more recently, the approaches initially developed to analyze VP ellipsis have been applied to the analysis of null arguments, both in spoken and in sign languages. This development in the field is reflected in the organization of this chapter. In Section 13.2, I report earlier work on null arguments in spoken and sign languages. Section 13.3 deals with VP ellipsis and, more generally, with ellipsis of constituents bigger than a single argument. Finally, in Section 13.4, I discuss the more recent VP ellipsis approach to null arguments. Section 13.5 is a short conclusion.

13.2  Earlier work on null arguments in sign languages

13.2.1  Null arguments in spoken languages
In this section, I briefly summarize two types of analyses of null arguments that were initially proposed in the generative tradition for spoken languages (cf. Biberauer et al. (2010) for an overview). I do that because the literature on sign languages initially built on this debate. It is clear that when an argument is null, there must be some way to identify its semantic contribution to the clause. Two distinct mechanisms have been identified. The first one is morpho-syntactic and can be exemplified by those Romance languages (Italian, Romanian, Spanish, among others) that have a rich paradigm for subject agreement. For example, as illustrated in (6), in Italian, there is a one-to-one correspondence between the morphological termination on the finite verb and the person and number features of the pronominal subject. (6)

(Io) mangio          'I eat'
(Tu) mangi           'you (sg.) eat'
(Lui/lei) mangia     'he/she eats'
(Noi) mangiamo       'we eat'
(Voi) mangiate       'you (pl.) eat'
(Loro) mangiano      'they eat'
(Italian)

Given this redundancy of information, the pronominal subject can remain null with no loss of relevant information. For convenience, I will call these null subjects ‘Italian-​type null categories’. Two technical implementations are possible for Italian-​type null categories. In one implementation (fully developed by Rizzi (1986)), the subject is a null pronoun (labeled pro) which sits in the canonical position for overt subjects. In principle, pro is identical to an overt pronoun, but for the fact that it is phonologically null, although in practice the fact that it is not pronounced creates some differences; for example, it cannot be stressed, and thus it is typically not a focus in the discourse. In


the second implementation, it is the agreement morpheme on the verb that acts as if it were a pronominal subject; therefore, no independent null pronominal category needs to be postulated (Alexiadou & Anagnostopoulou 1998). Two comments are relevant for the following discussion on sign languages. First, whatever implementation is chosen, there is a consensus in the literature that null subjects of the Italian type are pronominal in nature, either because a fully-fledged pronoun (pro) is postulated, or because the morpheme on the verb acts as a pronoun. The second comment is that null objects are correctly predicted to be much more restricted in these languages, since the finite verb typically does not agree with the object in the relevant Romance varieties. However, the Italian-type mechanism cannot be the only mechanism that allows null arguments. The reason is that there are other languages, including Chinese, Japanese, and Korean, that allow null arguments, although the verb does not carry any agreement morphemes. These languages are sometimes called 'topic-drop languages': once a topic has been introduced, it is possible not to express it in subsequent sentences, as it is understood to remain the same, until another topic is either explicitly or implicitly introduced. The basic generalization is that in topic-drop languages, both subjects and objects can remain null as long as they are topics. Again, for convenience (and not because Chinese is the only language displaying it), I will call this type of null category 'Chinese-type null category'. As assumed in the seminal work by Huang (1984) and in much later work, null topics of the Chinese type are not pronominal expressions, but are variables bound by a (possibly null) preceding topic.2 This has an important consequence. Null subjects of the Italian type can occur in virtually any position in which an overt subject surfaces, since they are locally licensed by the agreement morpheme on the verb. However, Chinese-type null arguments have a more restricted distribution because they must be bound by a suitable antecedent. Crucially, Huang assumes that the binding relation between the antecedent topic and the null category may hold at a long distance (as in (7), in which the null category bound by the topic 'Zhangsan' is the subject of the embedded clause) but is subject to familiar locality constraints. For example, the null object cannot be bound by the topic 'Zhangsan' in (8), arguably because the former is inside a relative clause (i.e., a syntactic island). (7)

Zhangsan, Wangwu shuo [Øi huilai]
Zhangsan  Wangwu  say     will come
'Zhangsan, Wangwu said he would come.'
(Chinese, adapted from Huang & Yang 2013: 4)

(8)

* Zhangsan, [wo renshi hen  duo  [piping    Ø de ren]]
  Zhangsan   I  know   very many  criticize   DE person
  'Zhangsan, I know many people that criticize (him).'
  (Chinese, adapted from Huang & Yang 2013: 4)

This division into two macro-categories (Italian-type and Chinese-type) is a simplification. In fact, there are many languages that do not easily fit into either of these two groups (for example, languages like German and Dutch, which normally require an overt subject in a tensed clause, but admit null expletive subjects). Still, it inspired early work on null arguments in sign languages, to which we now turn.


13.2.2  Lillo-Martin (1986) on null arguments in American Sign Language
The first work to investigate the analysis of null arguments in sign languages in depth is Lillo-Martin (1986), which focuses on American Sign Language (ASL). Her starting point is the distinction between agreeing verbs and plain verbs (cf. Padden 1983; see also Quer, Chapter 5). Agreeing verbs can be spatially modified to mark their arguments. Typically, the path movement of an agreeing verb begins at the locus associated with the subject and terminates at the locus associated with the object (a locus is the point in space where a noun phrase is articulated or, if the noun phrase is articulated on the signer's body, a point that is established by index pointing or by eye gaze). Plain verbs cannot be spatially modified to agree with their arguments. This happens, for example, with body-anchored verb signs, since they cannot be detached from the body to move between loci associated with arguments. Lillo-Martin's initial observation is that null arguments are allowed with both plain and agreeing verbs in ASL. An example of null arguments with a plain verb is given in (9), which is an appropriate answer to the question 'Did you eat my candy?' (9)

YES, ø EAT-UP ø
'Yes, I ate it up.'

(ASL, adapted from Lillo-​Martin 1986: 421)

An example of null arguments with an agreeing verb is (10), which is an appropriate answer to the question ‘Did John send Mary the paper?’. In (10), the subscript ‘a’ indicates the locus associated with the sender, and the subscript ‘b’ indicates the locus associated with the receiver. (10)

YES, ø aSENDb ø
'Yes, he sent it to her.'

(ASL, adapted from Lillo-​Martin 1986: 421)

However, Lillo-Martin argues that the null categories in (9) and (10) are not the same. More specifically, she claims that the null categories occurring with agreeing verbs are to be assimilated to Italian-type null arguments (but remember that Italian only allows for null subjects), while the null categories occurring with plain verbs are to be assimilated to Chinese-type null arguments. We reproduce here one of her arguments. We have seen that in Chinese, the link between a topic and its variable is sensitive to syntactic islands. Mutatis mutandis, the same island sensitivity is observed with the null argument of a plain verb in ASL, but crucially not with the null argument of an agreeing verb. In (11), the embedded verb (SEND) is agreeing, and the null category bound by the topic MOTHER can appear in an indirect question. However, the sentence becomes ungrammatical if the embedded verb is plain (cf. the verb LIKE in (12)). The counterpart of (12) with an overt pronoun (a-IX) instead of a null category is grammatical (13). In these examples, the continuous line over the sign MOTHER indicates the specific non-manual marking (raised eyebrows and slight backward head tilt) that co-occurs with a manual sign when it is a topic.

  topic

(11) a-MOTHERi, 1-IX DON'T-KNOW WHAT øi a-SEND-1
     'Motheri, I don't know what (shei) sent me.'
     (ASL, adapted from Lillo-Martin 1986: 425)

   

  topic

(12) * a-MOTHERi, 1-IX DON'T-KNOW WHAT øi LIKE
     'Motheri, I don't know what (shei) likes.'
     (ASL, adapted from Lillo-Martin 1986: 424)

        topic

(13) a-MOTHERi, 1-IX DON'T-KNOW WHAT a-IXi LIKE
     'Motheri, I don't know what shei likes.'
     (ASL, adapted from Lillo-Martin 1986: 424)

The pattern in (11)–(13) can be explained if (i) the null argument of an agreeing verb is a null pronominal, while the null argument of a plain verb is a variable that needs to be bound by its antecedent, and (ii) an embedded interrogative is an island that blocks binding (much like a relative clause is an island in (8) above).
Lillo-Martin's approach, although influential in the sign language literature, has been challenged on several grounds. First, Koulidobrova (2017) argues that the contrast between agreeing and plain verbs illustrated in (11) and (12) might be spurious. Koulidobrova observes that MOTHER in these sentences is uttered in non-neutral areas of space, as indicated by the index 'a' in the glosses. However, if MOTHER is uttered in a neutral location instead, the paradigm changes: the asymmetry between verb types disappears because the null object becomes possible also with a plain verb, as shown in (14).

        topic

(14)

MOTHERi, 1-IX DON'T-KNOW WHAT ø LIKE
'Motheri, I don't know what (shei) likes.'
(ASL, adapted from Koulidobrova 2017: 402)

Second, as observed by Quer & Rosselló (2013), Lillo-Martin herself claims that embedded clauses are always islands in ASL. If this were true, then all variables in embedded position (not only the ones found inside a syntactic island) should cause ungrammaticality. It follows that the ungrammaticality of (12) cannot be used as evidence supporting the extension of Huang's approach to ASL. Another criticism of Lillo-Martin's analysis was advanced by Neidle et al. (2000), as we show in the next section.

13.2.3  Neidle et al. (1996, 2000) on null arguments in American Sign Language
Neidle et al. (1996, 2000), largely based on data from Bahan's (1996) dissertation, question Lillo-Martin's analysis. The starting point of their approach is the observation that agreement features in ASL have non-manual correlates that should not be neglected in the analysis of null arguments. More specifically, a locus is typically established by index finger pointing or by articulating a sign in a specific position in the neutral space, but there are two main non-manual means of pointing toward a locus: the head can tilt toward its position, or the eyes can gaze to that location. Crucially, these two non-manual devices are possible both with plain and agreeing verbs. For example, in (15), the head tilts toward the location associated with the subject, BILL, and the eyes gaze to the location associated with the object, BOB (non-manual expressions of subject and

object agreement are extremely frequent with agreeing verbs, but not required according to Neidle et al. (2000)).     head tilti   eye gazej

(15)

BILL IXi

iHITj

BOBj

‘Bill (there) hit Bob.’               (ASL, adapted from Neidle et al. 1996: 17) As we already know, in sentences like (15), where there is an agreeing verb, both the subject and the object can remain null (on this point, Neidle et al. and Lillo-​Martin agree). Let us now turn to example (16), which involves the plain verb LOVE . Neidle et al. (1996) argue that in sentences like (16), there is overt manifestation of subject agreement (head tilt) and object agreement (eye gaze). Also, with these examples, non-​manual markings of agreement occur quite frequently.    

(16)

  head tilti   eye gazej

JOHN IXi LOVE MOTHERj

‘John loves mother.’       

(ASL, adapted from Neidle et al. 1996: 20)

Therefore, according to Neidle et al., the difference between (15) and (16) is only morphological, not syntactic: in (15), agreement is expressed both manually and non-manually, while in (16), it is expressed only non-manually, but the same functional structure is projected in the two types of sentences. This has consequences for the issue of null arguments: Neidle et al. assume that the null argument occurring with both plain and agreeing verbs is the same category, namely pro. This category is licensed by the presence of morpho-syntactic agreement features with both types of verbs. A prediction follows from this account: while a null category should always be possible with an agreeing verb, a null category should only be allowed with a plain verb if non-manual marking of agreement is present. Neidle et al. claim that this prediction is borne out by the contrast between (17) and (18). (17), which is ungrammatical, involves a plain verb but no expression of non-manual agreement. However, the sentence becomes grammatical if non-manual agreement marking is present, as in (18).

(17) * ø LOVE MOTHER
     '(He/she) loves mother.'
     (ASL, adapted from Neidle et al. 1996: 21)

(18)

  head tilti     eye gazej

øi LOVE MOTHERj
'(He/she) loves mother.'

(ASL, adapted from Neidle et al. 1996: 21)

Neidle et al.'s theory has been experimentally challenged by Thompson et al. (2006), who conducted a study using head-mounted eye-tracking to measure signers' eye gaze. They involved ASL signers in a series of tasks: the signers had to tell a story illustrated by a series of pictures, then they were asked to retell it, and finally they were invited to make up a different story by using a list of ASL verbs, including both plain and agreeing verbs. Thompson et al. found that while eye gaze accompanying agreeing verbs was indeed

most frequently directed toward the location of the object, eye gaze accompanying plain verbs was rarely directed toward the object. Furthermore, plain verbs occurring with null object pronouns were not marked by gaze toward the location of the object. They comment that this result is inconsistent with Neidle et al.'s description of the role of gaze (the eye-tracking methodology does not make it possible to study the role of head tilt). For more details, see Hoseman, Chapter 6. Up to now, we have considered two possible analyses of null arguments in sign languages, namely the view that they are 'Italian-type' null pronominals and the view that they are 'Chinese-type' variables bound by a topic. Although they are different, these approaches share the assumption that the null argument is a category intrinsically devoid of phonological content, either because it is pro or because it is a variable bound by a topic. We now temporarily set aside null arguments. We will come back to them after discussing VP ellipsis, since this discussion will provide the necessary background to analyze a further type of approach to null arguments.

13.3  VP ellipsis in sign languages
The studies devoted to predicate ellipsis (including VP ellipsis) in sign languages are very few. When this chapter was written, only one paper was entirely devoted to this topic (Cecchetto et al. (2015) on Italian Sign Language (LIS)), although examples of predicate ellipsis in other sign languages were scattered across papers devoted to other topics. As this chapter goes to press, two important sources have enriched the literature, namely Zorzi (2018a, 2018b). An early mention of VP ellipsis is in Lillo-Martin (1995), who makes the important observation that VP ellipsis allows the sloppy reading in ASL (I postpone discussing the issue of the sloppy reading to Section 13.4, where it will be central). Cecchetto et al. observe that in LIS, a predicate can go unuttered if a suitable antecedent is present. For example, this happens when the elliptical clause involves the adverbial sign SAME. In (19), the missing constituent in the elliptical clause corresponds to the VP MARIA LIKE (notice that LIS is SOV, which explains the word order in the antecedent clause in (19)).

(19) GIANNI MARIA LIKE. PIERO ø SAME
     'Gianni likes Maria. Piero does too.'

(LIS, Cecchetto et al. 2015: 221)

Ellipsis in sentences like (19), although suggestive of a VP ellipsis analysis, by itself is not conclusive evidence in favor of such an analysis because it might be a case of the phenomenon called stripping. Stripping and VP ellipsis in English are distinguished by the occurrence of an auxiliary, which is present in VP ellipsis (20) but is absent in stripping (21).
(20) Gianni likes apples. Piero does too.
(21) Gianni likes apples. Piero, too.
Stripping and VP ellipsis are different in several other respects (cf. Lobeck 1995). For example, VP ellipsis can occur in subordinate clauses (22), while stripping cannot (23).
(22) Gianni likes apples. I think Piero does too.
(23) * Gianni likes apples. I think Piero, too.

302

Carlo Cecchetto

A further difference is that backward anaphora is possible with VP ellipsis (24), but not with stripping (25).
(24) John didn't, but Mary bought books.
(25) * John too and Mary bought books.
While VP ellipsis of the English type is rare cross-linguistically (Goldberg 2005), stripping is more widespread and is typically attested also in languages that do not display VP ellipsis. Cecchetto et al. (2015) present several arguments supporting the existence of genuine VP ellipsis in LIS. One argument is provided by sentences like (26b). (26) a.

GIANNI BEAN EAT FUT. PIERO ø SAME
   'Gianni will eat beans and Piero too.'
b. GIANNI BEAN EAT FUT. PIERO ø FUT SAME
   'Gianni will eat beans and Piero will too.'  (LIS, Cecchetto et al. 2015: 222)

While (26a) is likely to be a case of stripping, this analysis cannot be extended to (26b), since the future tense auxiliary (FU T ) is not omitted in the elliptical clause. Another argument in favor of the VP ellipsis analysis is illustrated by sentences like (27).  

(27)

    if

P IE RO NOT G IAN N I G O

‘If Piero does not, Gianni will go.’   

(LIS, Cecchetto et al. 2015: 224)

(27) has the two properties mentioned above that stripping does not have:  the ellipsis site occurs in a subordinate clause (a conditional clause marked by specific non-​manual markers, glossed as ‘if’), and backward anaphora is observed. Cecchetto et al. (2015) proceed to investigate whether data from LIS can offer evidence on two long-​standing issues in the literature on VP ellipsis. The first question is whether the missing VP is present in the syntactic component (although it is phonologically null), or whether it is supplied when the sentence needs to be interpreted (cf. Aelbrecht (2010) for a summary of this debate). In particular, two main families of explanations have been proposed: the phonological deletion approach and the semantic copying approach. According to the phonological deletion approach, a full-​fledged VP is present in syntax, although it is unpronounced. According to the semantic copying approach, a silent proform is generated in syntax and is interpreted in the semantic component as having the same meaning as the antecedent VP. Therefore, under the semantic copying approach, a full-​fledged VP is not present at any step of the syntactic derivation. A diagnostic that has been used to decide between these two approaches is the possibility to extract a wh-​ phrase out of an ellipsis site. Since the semantic copying approach assumes that the VP content is supplied as late as in the interpretative component, it is not expected that overt wh-​movement can take place out of the ellipsis site. However, this is possible in English (28) and in LIS as well (29).3 This suggests that VP ellipsis results from phonological deletion, at least in these languages.

302

303

Null arguments and ellipsis

(28) I know which person Mary talked to and which person Bill didn’t Ø.   wh

(29)



wh

ø FUT  WHO I-​K NOW NOT ‘I know who Gianni met in the past but I do not know who he will meet in the future.’                     (LIS, Cecchetto et al. 2015: 225) IN-​T HE-​PAS T G IAN N I MEET 

WH O I-​K N OW BUT

The second hot topic in the literature on VP ellipsis is the recoverability condition on the content of the ellipsis site (cf. Cecchetto & Percus (2006) for a summary of this debate).4 Clearly, this content must be recoverable from the antecedent. But the question is whether recoverability is due to semantic or morpho-​syntactic identity. Some authors claim that a category can go unuttered only if it has the same meaning as its linguistic antecedent (semantic identity). Others claim that a category can go unuttered only if it has the same form as its linguistic antecedent (morpho-​syntactic identity). Cecchetto et al. (2015) build on a sign language specific property to offer evidence in favor of the identity in form approach. The relevant property is the fact that in many sign languages, including LIS, adverbs can stand alone (as they normally do in spoken languages), or adverbial modification can be expressed by modifying the verb root. For example, the adverb QU I C K LY can be signed as a separate lexical item, as in (30), or can form a single lexical item with the verb it modifies, as in (31). When this happens the movement of the dominant hand towards the mouth of the signer, characteristic of the sign E AT , is repeated and is articulated more rapidly than in the citation form of the verb. Crucially, (30) and (31) may convey the same meaning, namely that Mario’s way of eating meat was quick. (30)

M ARIO M EAT EAT QU ICK LY

‘Mario eats meat quickly.’ (31)

(LIS, Cecchetto et al. 2015: 227)

M ARIO M EAT EAT-​Q U ICK LY

‘Mario eats meat quickly.’

(LIS, Cecchetto et al. 2015: 227)

Cecchetto et al. (2015) claim that if identity in meaning were enough to license ellipsis, sentences like (32) and (33) below should be on a par with each other, since the antecedent clause in (32) and (33) expresses the same meaning, despite the fact that (32) contains an independent sign for QU ICK LY , while in (33), adverbial modification takes place by modifying the verb root. However, while (32) is fully acceptable, (33) is sharply ungrammatical with the intended meaning, namely that Gianni eats meat slowly. (32)

ø S AME ‘Mario eats meat quickly. Gianni does that slowly.’ M ARIO M EAT EAT QU ICK LY. G IAN N I

SLOWLY

(33) *M ARIO M EAT EAT-​Q U ICK LY. G IAN N I ø SAME ‘Mario eats meat quickly. Gianni does that slowly.’

303

(LIS, Cecchetto et al. 2015: 227)

SLOWLY

(LIS, Cecchetto et al. 2015: 227)

304

Carlo Cecchetto

On the other hand, if identity in form is required, there is an easy explanation for why (33) is out:  since QU ICK LY forms a single lexical item with the verb in the antecedent clause, if the ellipsis site is identical in form to its antecedent, there is a clash in meaning (one cannot eat slowly and quickly at the same time). In (32), there is no clash because the ellipsis site is identical to the verb EAT alone.5 Moving to other sign languages, Jantunen (2013), a paper devoted to ellipsis and null arguments in Finnish Sign Language (FinSL), reports cases of gapping6 (34), sluicing7 (35), and cases like (36), which, in absence of a diagnostic to differentiate between these two analyses, are ambiguous between a VP ellipsis analysis and a stripping analysis. (34)

GIRL HAS -​G OT TWO-​P IECES. BOY

ø ONE-​P IECE ‘The girl has two and the boy (has) one.’ (FinSL, adapted from Jantunen 2013: 317)

(35)

M E ALREADY-​K N OW YOU BU Y+ALREADY APPLE  BUT WHY

(36)

B OY BUY APPLE G IRL



    wh

ø ‘I know that you bought an apple but (I don’t know) why (you bought it).’                   (FinSL, adapted from Jantunen 2013: 321) ø ALS O ‘The boy bought an apple and the girl (bought an apple) too.’                       (FinSL, adapted from Jantunen 2013: 321)

Cases of sluicing have been reported also in ASL (Koulidobrova 2017) and in LIS (Cecchetto et. al 2015).

13.4  The ellipsis analysis of null arguments It is perhaps not surprising that once a set of analytical categories was developed to account for VP ellipsis, scholars tried to apply the same apparatus to cases of argument ellipsis. In addition to the obvious appeal of accounting for two phenomena with the same machinery, an empirical observation motivated this switch of perspective. It is well known that the elliptical clause in (37) is ambiguous. Under the so-​called strict reading, the sentence means that Bill loves John’s mother, while under the so-​called sloppy reading, the sentence means that Bill loves Bill’s mother. (37) John loves his mother and Bill does too. However, the sloppy reading is not available in the non-​elliptical version of (37), which contains a pronoun (38). (38) John loves his mother and Bill loves her too. In sign languages as well, VP ellipsis licenses the sloppy reading, so example (39) can mean that Mary thinks that she has mumps, and (40) can mean that Piero values Piero’s secretary.

304

305

Null arguments and ellipsis

(39) a-​J OHN T H IN K a-​I X H AVE MU MPS b-​M ARY SAME ‘John thinks he has mumps and Mary does too.’ (ASL, Lillo-​Martin 1995: 168) (40)

GIANNI i SECRETARY POS S i VALU E. PIERO SAME

‘Gianni values his secretary and Piero does too.’

(LIS, Cecchetto et al. 2015: 229)

Interestingly, the overt/​null character of the pronoun does not seem to have an impact on the availability of the sloppy reading. As observed by Oku (1998), a pronominal null subject in Spanish does not license a sloppy reading either, that is, (41) cannot mean that Juan believes that Juan’s proposal will be accepted. The same happens in other Romance null subject languages, like Catalan (Quer & Rosselló 2013) and Italian (my judgment). (41) María cree que su propuesta será aceptada y Juan también cree que Ø será aceptada. ‘María believes that her proposal will be accepted, and Juan too believes that pro will be accepted.’ (Spanish, Oku 1998: 305) Therefore, the presence of the sloppy reading has been taken to be a diagnostic of the occurrence of VP ellipsis (although this claim has been challenged, see below). Crucially, a null subject in Japanese behaves differently from a null subject in Romance languages in this respect. The sloppy reading is possible in (42), as the sentence can mean that John thinks that John’s proposal will be accepted. (42) Mary-​wa   zibun-​no  ronbun-​ga   saiyo-​sare-​ru-​to    omottaeiru. Mary-​T OP  self-​ G EN    paper-​N OM  accept-​ PASS-​P RES-​C OMP  think. John-​mo  Ø  saiyo-​sare-​ru-​to   omotteiru. John-​also   accept-​PAS S -​P RES -​C OMP  think. ‘Mary thinks that her paper will be accepted. John also thinks that it will be accepted.’                   (Japanese, Oku 1998: 305) The contrast between (41) and (42) is very telling because there is a consensus that null subjects of the Romance type are pronominal in nature (cf. Section 13.2). The fact that null arguments in Japanese behave differently, suggests that they are not. More specifically, based on this type of evidence, it has been proposed that null arguments in Japanese result from the same mechanism which is responsible for VP ellipsis.8 The ellipsis analysis for null arguments has been extended to ASL by Koulidobrova (2017). One of her arguments is the fact that also in ASL, a null argument can be interpreted sloppily. In particular, the second part of (43) can mean either that Jeff hates Peter’s students or that Jeff hates Jeff’s students. (43) A: a-​P E T ER LIK E a-​P OS S S TU D EN T . ‘Peter likes his students.’ B: b-​J E F F H ATE Ø ‘Jeff hates ({Peter’s/​Jeff’s} students).’        (ASL, Koulidobrova 2017: 414) As long as the sloppy reading diagnostic is a reliable indicator of ellipsis, the pattern in (43) indeed suggests an ellipsis analysis for null arguments in ASL. Koulidobrova 305


Koulidobrova develops the ellipsis idea by proposing that what undergoes ellipsis in ASL is not the entire nominal constituent (DP in technical terminology) but only the bare noun, which, however, she assumes to be argumental in ASL. This analysis is suggested by examples like (44). Koulidobrova reports that (44) is three-ways ambiguous. It can mean that the same three students who joined the class dropped it, as expected if the entire DP THREE STUDENT is copied in the second clause. But it can also mean that three different students dropped the class, or that some other students (whose number is unspecified) dropped it. These readings are not immediately compatible with the hypothesis that the quantifier THREE is copied in the gap position in the second clause, and for this reason, Koulidobrova assumes that only the noun STUDENT is copied (incidentally, this requires a revision of some traditional assumptions about ASL, including rejecting the analysis of the pointing sign as a definite determiner, for which Koulidobrova cites independent evidence).

(44) A:

THREE STUDENT JOIN 1-POSS CLASS

‘Three students joined my class.’
B: Ø DROP 1-POSS CLASS
‘({The same three/different three} students) dropped my class.’
(ASL, Koulidobrova 2017: 414)

It must be said that the ellipsis analysis of null arguments has been challenged, too. In particular, Quer & Rosselló (2013) argue that the presence of the sloppy reading is not a reliable indication that we are dealing with ellipsis rather than with a pronominal null category. They give several arguments for their claim, including the fact that a null subject in Romance (therefore, a pronominal category) is not always incompatible with the sloppy reading. This is shown by a sentence like (45), which can mean that Pere says that Pere studies French, and Joan says that Joan studies French, as well.9

(45)

En Pere diu que Ø estudia francès i en Joan també diu que Ø estudia francès.
DET Pere says that studies French and DET Joan also says that studies French
‘Pere-i says that he-i studies French, and Joan-j also says that he-i/j studies French.’
(Catalan, Quer & Rosselló 2013: 354)

If Quer & Rosselló are right, one of the main motivations for the ellipsis analysis of null arguments is jeopardized.

13.5 Conclusion

It should be apparent that, although the study of ellipsis in sign language is relatively recent and the number of works addressing the topic is not huge, the intricacies and the interpretative riddles that make this topic difficult and at the same time fascinating arise in sign languages no less than in spoken languages. Much work needs to be done, though. For example, it is probably fair to say that, to date, no in-depth analysis of both null arguments and predicate ellipsis is available for any sign language. This makes the current proposals largely dependent (probably too dependent) on the debate started by spoken language data.


While there is nothing wrong in using analytical categories developed for spoken languages in the study of sign languages, under the assumption that they are both expressions of the biological language faculty, one would expect these categories to be enriched, modified, or even discarded by the consideration of sign language data. So far, this has happened only partially, and this is another reason for deepening the investigation of ellipsis phenomena in sign languages.

Notes
1 LE is a perfective or inchoative aspect marker.
2 For reasons of space, I simplify somewhat my presentation of Huang’s account. In his account, while a null object is always a variable bound by a topic, a null subject can either be a pro or a variable.
3 In order to grasp the word order in (29), one should keep in mind that in LIS, wh-movement targets the right periphery of the clause (cf. Cecchetto et al. 2009). Furthermore, LIS being SOV, the embedded question precedes the main verb I-KNOW in (29).
4 Sometimes the issue regarding the choice between the phonological deletion and the semantic copying approach and the issue concerning recoverability are not distinguished, but it would be a mistake to identify them with each other. For example, one can support the semantic copying approach, but still maintain that copying takes place only under morpho-syntactic identity with a suitable antecedent.
5 Interestingly, there seems to be some variation across sign languages here. Koulidobrova (2017) applies to ASL the diagnostics based on ‘adverb incorporation’ by using the ASL adverbs FAST/SLOW. She confirms that the sentence where the adverb is ‘incorporated’ in the verb root (i.e., the counterpart of (31)) is ungrammatical. This is similar to the LIS pattern. However, she reports that the version where the adverb is an independent sign (i.e., the counterpart of (30)) is also ungrammatical in ASL. This introduces an interesting cross-linguistic difference. It is possible that LIS allows ellipsis of the VP to the exclusion of the VP-peripheral adverb while ASL does not.
6 Gapping is the ellipsis phenomenon under which the verb in the second conjunct can be elided under conditions of identity with the verb in the first conjunct. The following is an example from English.
(i) John eats an apple and Mary Ø a candy.
7 Sluicing is the ellipsis phenomenon under which everything except the wh-expression is elided from an interrogative clause. The following is an example from English.
(i) Someone bought a present, but I do not know who Ø.
8 Saito (2007) favors the semantic copying mechanism, as he assumes that the position of the null argument is not projected in syntax and is given a content in the semantic component. The reason for assuming this is that Japanese behaves differently from English and LIS when the diagnostic based on extraction from the ellipsis site is applied. While extraction is possible in English and LIS (cf. (28) and (29) in the text), scrambling out of an elliptical category is apparently not possible in Japanese (cf. Saito (2007) for the data and for a more accurate description). Incidentally, notice that in principle, the difference between English and LIS on the one side and Japanese on the other side might also be due to the type of extraction involved (wh-movement and scrambling, respectively).
9 The difference between (41) and (45) is that in (41), the null subject contains the pronominal expression (‘his/her’) which does not admit the sloppy reading, while in (45), the entire subject is the pronominal expression that can be interpreted sloppily.

References
Aelbrecht, Lobke. 2010. The syntactic licensing of ellipsis. Amsterdam: John Benjamins.
Alexiadou, Artemis & Elena Anagnostopoulou. 1998. Parametrizing AGR: Word order, V-movement and EPP-checking. Natural Language and Linguistic Theory 16. 491–539.


Bahan, Benjamin. 1996. Non-manual realization of agreement in American Sign Language. Boston: Boston University PhD dissertation.
Biberauer, Theresa, Anders Holmberg, Ian Roberts, & Michelle Sheehan (eds.). 2010. Parametric variation: Null subjects in minimalist theory. Cambridge: Cambridge University Press.
Cecchetto, Carlo, Alessandra Checchetto, Carlo Geraci, Mirko Santoro, & Sandro Zucchi. 2015. The syntax of predicate ellipsis in Italian Sign Language (LIS). Lingua 166. 214–235.
Cecchetto, Carlo, Carlo Geraci, & Sandro Zucchi. 2009. Another way to mark syntactic dependencies: The case for right peripheral specifiers in sign languages. Language 85. 278–320.
Cecchetto, Carlo & Orin Percus. 2006. When we do that and when we don’t: A contrastive analysis of VP ellipsis and VP anaphora. In Mara Frascarelli (ed.), Phases of interpretation, 67–100. Berlin: Mouton de Gruyter.
Goldberg, Lotus Madelyn. 2005. Verb-stranding VP ellipsis: A cross-linguistic study. Montréal: McGill University PhD dissertation.
Huang, James. 1984. On the distribution and reference of empty pronouns. Linguistic Inquiry 15. 531–574.
Huang, James & Barry Yang. 2013. Topic-drop and MCP. Paper presented at the 87th Annual Meeting of the Linguistic Society of America. Boston, MA. Available at: scholar.harvard.edu/files/ctjhuang/files/2013.0105_lsa_mcp_huangyang.pdf (accessed 16 August 2016).
Jantunen, Tommi. 2013. Ellipsis in Finnish Sign Language. Nordic Journal of Linguistics 36. 303–332.
Koulidobrova, Elena. 2017. Elide me bare: Null arguments in American Sign Language. Natural Language and Linguistic Theory 35. 397–446.
Lillo-Martin, Diane. 1986. Two kinds of null arguments in American Sign Language. Natural Language and Linguistic Theory 4. 415–444.
Lillo-Martin, Diane. 1995. The point of view predicate in American Sign Language. In Karen Emmorey & Judy Reilly (eds.), Language, gesture, and space, 155–170. Hillsdale, NJ: Lawrence Erlbaum.
Lobeck, Anne. 1995. Ellipsis: Functional heads, licensing and identification. Oxford: Oxford University Press.
Neidle, Carol, Judy Kegl, Dawn MacLaughlin, Benjamin Bahan, & Robert Lee. 2000. The syntax of American Sign Language: Functional categories and hierarchical structure. Cambridge, MA: MIT Press.
Neidle, Carol, Dawn MacLaughlin, Benjamin Bahan, & Judy Kegl. 1996. Non-manual correlates of syntactic agreement in American Sign Language. ASL Linguistic Research Program Report No. 2. Available at: www.researchgate.net/publication/239932188 (accessed 10 March 2020).
Oku, Satoshi. 1998. LF copy analysis of Japanese null arguments. In Catherine Gruber, Derrick Higgins, Kenneth S. Olson, & Tamra Wysocki (eds.), Chicago Linguistic Society (Vol. 3), 299–314. Chicago: Chicago Linguistic Society.
Padden, Carol. 1983. Interaction of morphology and syntax in American Sign Language. San Diego: University of California PhD dissertation.
Quer, Josep & Joana Rosselló. 2013. On sloppy readings, ellipsis and pronouns: Missing arguments in Catalan Sign Language (LSC) and other argument-drop languages. In Victoria Camacho-Taboada, Ángel Jiménez-Fernández, Javier Martín-González, & Mariano Reyes-Tejedor (eds.), Information structure and agreement, 337–370. Amsterdam: John Benjamins.
Rizzi, Luigi. 1986. Null objects in Italian and the theory of pro. Linguistic Inquiry 17. 501–555.
Saito, Mamoru. 2007. Notes on East Asian argument ellipsis. Language Research 43. 203–227.
Thompson, Robin, Karen Emmorey, & Robert Kluender. 2006. The relationship between eye-gaze and verb agreement in American Sign Language: An eye-tracking study. Natural Language and Linguistic Theory 24. 571–604.
Zorzi, Giorgia. 2018a. Coordination and gapping in Catalan Sign Language. Barcelona: Universitat Pompeu Fabra PhD dissertation.
Zorzi, Giorgia. 2018b. Gapping vs VP-ellipsis in Catalan Sign Language. Formal and Experimental Advances in Sign Language Theory (FEAST) 1. 70–81.


14 Null arguments: Experimental perspectives

Diane Lillo-Martin

14.1 Introduction

Numerous studies experimentally investigate properties of null arguments in sign languages. In this chapter, such studies are divided into two sections: psycholinguistic studies with adults, and acquisition studies (with both children and adults). Before diving into these sections, a review of the nature of null arguments in sign languages is needed, and a summary of the research questions addressed by existing experimental studies is presented.

In Chapter 13, Cecchetto presented basic facts and theoretical questions raised about the nature of null arguments in sign languages. Here, those aspects of null arguments that are relevant for understanding the experimental studies summarized below are presented. In fact, first a brief review of overt pronouns is given, as this will be important for several of the experimental studies.

In sign languages (all that we know of), pointing signs (also known as ‘index’ signs, sometimes glossed as INDEX or IX) assume the function of overt pronouns (see Cormier (2012) for an overview). A signer directs a point at themself for first-person reference; a point directed at the addressee is interpreted as ‘you’, and a point directed at a non-addressed referent is interpreted as ‘she/he/it’.1 For referents not in the immediate physical context, a location in space is designated as the location toward which the pointing sign should be directed, known as a locus or R-locus (for referential locus; Lillo-Martin & Klima 1990). This designation may take place by initially placing the sign naming the referent in the spatial location, by gazing at the location, or by using verbal morphology employing that location, among others. This process is sometimes known as locus establishment.

Sign languages (all established Deaf community sign languages that we know of) also make use of the loci just described in the modification of a certain subset of verbs. This process is often described as verb agreement, though the precise analysis is a matter of some debate (see Quer, Chapter 5, and Hosemann, Chapter 6; also Lillo-Martin & Meier 2011; Mathur & Rathmann 2012). Despite the potential that agreement is not the correct analysis of this phenomenon, the term will be used here. Verbs marked for agreement are spatially produced so that they move and/or are oriented with respect to loci.


For transitive verbs prototypically having human arguments (Meir 2002), agreement usually involves moving the sign from the locus associated with the subject toward the locus associated with the object. Verbs may also be localized to indicate spatial information such as source and goal or location of an event; in this case they are frequently referred to as spatial verbs employing spatial or locative agreement. Verbs that are not spatially modified are commonly called plain verbs (Padden 1983).

A description of verb agreement is important in the context of null arguments because it has been argued that (at least) some null arguments in American Sign Language (ASL) are syntactically licensed by agreement (see review in Cecchetto, Chapter 13; Lillo-Martin 1991; Bahan et al. 2000). In particular, it is descriptively clear that arguments of verbs marked with agreement may be null, and usually are. In such cases, the referent of the null argument (or its identification) is determined by the agreement. Null arguments are not only found in contexts of agreeing verbs. On one analysis, they are licensed by a (potentially null) discourse topic, similar to null arguments in languages without verb agreement such as Chinese and Japanese. As described in Cecchetto, Chapter 13, an alternative analysis treats all null arguments as instances of argument ellipsis (Koulidobrova 2012, 2017a).

A number of studies have sought to characterize the properties of overt and null arguments in languages employing them. There seems to be some variability in the acceptability of an overt pronoun in contexts where a null argument might be employed. One approach to such questions is to calculate which kinds of nominal elements are used for reference tracking in different discourse contexts, including introduction of a new referent, maintenance of a previously introduced referent, and re-introduction of a referent after a reference shift. Typically, introduction of a referent requires the use of an overt noun phrase, while languages use overt pronouns or, if available, null pronouns for maintenance of an already introduced referent. When a referent has been introduced but then not used in a clause, it needs to be re-introduced, and typically re-introduction would use a definite noun phrase or pronoun (e.g., Hickmann & Hendriks 1999).

A few studies have examined these reference tracking preferences in sign languages. Perniss & Özyürek (2015) looked at this issue for (adult native) signers of German Sign Language (DGS) (in comparison to gestures produced by non-signers). Adult deaf native signers were asked to narrate a brief video vignette. As expected, they found that overt elements were much more frequent in re-introduction contexts (about 65% of re-introductions were overt) compared to maintenance contexts (not much more than 10%) in DGS. Frederiksen & Mayberry (2016) obtained different results in a study of native signers of ASL. In their study, deaf native signers told stories from short picture books and videos. Unsurprisingly, they found that signers used overt nouns nearly all the time for introductions. However, in both maintenance and re-introduction contexts, the authors found use of ‘zero anaphora’ (which includes verbs marked with agreement and plain verbs) about 70% of the time. The difference between these contexts came with respect to the forms used instead of zero anaphora. In the maintenance context, classifiers were used; these can be considered to be similar to agreeing verbs in that they are predicates that include information about their arguments (see Tang et al., Chapter 7, and Zwitserlood, Chapter 8; Benedicto & Brentari 2004; Zwitserlood 2012). On the other hand, overt nouns were used in virtually all of the re-introduction contexts that did not have zero anaphora. It can be concluded, then, that both sign languages are similar to spoken languages that allow null arguments, in that they strongly prefer null arguments for contexts of maintenance.


As summarized in Cecchetto, Chapter 13, a major theoretical issue in analyzing null arguments in sign languages has been their derivation: are they null pronominals, variables from movement of a null element, or the consequence of ellipsis? While these questions remain important for theoretical analyses, they have not been directly addressed in experimental studies. Instead, the experimental studies have focused on the following questions. First, do fluent native signers process null arguments online in a way similar to the processing of overt pronouns? If so, this may provide a hint as to the nature of the null arguments, though it does not definitively address their derivation. Second, studies of the acquisition of null arguments have asked how children come to know the grammatical requirements for their use, both in terms of sentence-​level licensing and with respect to the discourse-​level preference patterns observed in fluent signers. Existing studies have extended the latter class of questions beyond native signers to look at both child bimodal bilinguals and adult second-​language learners.

14.2 Psycholinguistic studies with adults

A series of studies by Emmorey and colleagues has investigated the processing of sentences with overt pronouns or null arguments. These studies had as their foundation numerous works on processing English that indicated very quick, online access to the antecedent of a pronoun, known as antecedent or referent activation (see Fodor (1993) and Nicol & Swinney (2002) for overviews). In general, when a listener hears a pronoun in an English sentence, the antecedent of that pronoun is mentally activated. On the other hand, other noun phrases that are not the antecedent of the pronoun (non-antecedents) are suppressed or inhibited. Furthermore, the less ambiguous the pronoun is, the more quickly non-antecedent suppression is seen. Moreover, although the contexts in which English permits arguments to be null are limited, antecedent activation of null arguments has also been observed.

Emmorey’s studies applied one of the methodologies used to examine referent activation in spoken languages to the case of ASL, with appropriate modifications for use with fully visual stimuli. The probe-recognition task used in spoken languages employs either visually-presented written words or auditorily-presented speech, involving a sentence containing a pronoun. At some point, either the end of the sentence or some number of milliseconds after the pronoun, a probe word is presented. The participant’s task is to decide whether the probe word appeared earlier in the sentence. Their response time to a probe following a coreferential pronoun is compared to response time in other sentences, where the probe word appeared but no pronoun was used. An example is given in (1), from MacDonald (1986, cited by Emmorey et al. 1991). In (1a), the pronoun ‘she’ takes ‘Meg’ as its antecedent, and participants respond faster to the probe word ‘Meg’ after being presented with this sentence than they do with (1b), where there is no pronoun to activate the antecedent.

(1) a. Meg worked puzzles with Burt on the porch, but she got tired very easily.
b. Meg worked puzzles with Burt on the porch, but many of the pieces were missing.
(MacDonald 1986, cited by Emmorey et al. 1991)

In the study by Emmorey et al. (1991), sentences such as those in example (2a) were visually presented entirely in ASL to participants.


A probe (signed) word was presented either immediately following the overt pronoun or after a 1000 ms delay; the probe was identified by the fact that the recorded signer used different colored clothing while producing the probe sign compared to the test sentence. Following the probe, the participants did not see the rest of the sentence. In the no-pronoun condition (2b), there is no pronoun in the sentence, so the probe was presented immediately or 1000 ms after the eleventh sign. In the notation, the subscript ‘a’ or ‘b’ indicates spatial locations used in the production of these signs; INDEX is the gloss used for the pronominal sign.

(2) a.

ONE-YEAR-AGO HIGH S-F JUDGEa DECIDE PUT-DOWN PRISON-AGENTb LIFE JAIL, SUDDENLY INDEXb HEART ATTACK DIE.

‘A year ago, a high court judge from San Francisco decided to sentence a prisoner to life in jail, but unexpectedly he (prisoner) had a heart attack and died.’
Probes: PRISON-AGENT (referent), JUDGE (non-referent)

b.

ONE-YEAR-AGO HIGH S-F JUDGEa DECIDE PUT-DOWN PRISON-AGENTb LIFE JAIL, SUDDENLY LAW-AGENT FIND NEW EVIDENCE.

‘A year ago, a high court judge from San Francisco decided to sentence a prisoner to life in jail, but unexpectedly a lawyer found some new evidence.’
Probes: PRISON-AGENT, JUDGE

The results of this study indicated that participants did indeed activate the antecedent of the pronouns, when the probe was presented at the 1000 ms delay, since their reaction time was faster to the antecedents than to the non-antecedents, and furthermore antecedents were responded to more quickly in the pronoun condition than in the no-pronoun condition. In sum, the study by Emmorey et al. (1991) indicated that a similar pronoun processing mechanism applies both for signers and for speakers, although there are differences that will be discussed below.

Emmorey & Lillo-Martin (1995) extended the results of Emmorey’s study with overt pronouns to the case of null arguments, asking whether participants would show a similar activation of antecedents in sentences with null arguments as they did in the earlier overt pronoun study. For this experiment, sentences containing overt pronouns were compared to sentences with null arguments of agreeing verbs; for a control (baseline) condition, no-anaphora sentences were used. Examples of each condition are given in (3).

(3) a. Overt pronoun
FUNNY, KNOW-THAT MY CAT INDEXa SNOBBY [“reserved and snooty”], MY DOG INDEXb BOIL. INDEXb FED-UP.
‘It’s funny – you know that my cat is reserved and snooty, and my dog is boiling mad. He (the dog) is fed-up.’
Probes: DOG (referent), CAT (non-referent)

b. Null pronoun
FUNNY, KNOW-THAT MY CAT INDEXa SNOBBY [“reserved and snooty”], MY DOG INDEXb BOIL. bHATEa WOW.
‘It’s funny – you know that my cat is reserved and snooty, and my dog is boiling mad. (the dog) really hates (the cat), wow!’
Probes: DOG (subject referent), CAT (object referent)

c. Control
FUNNY, KNOW-THAT MY CAT INDEXa SNOBBY [“reserved and snooty”], MY DOG INDEXb BOIL. INDEX1st STEP-OUT.
‘It’s funny – you know that my cat is reserved and snooty, and my dog is boiling mad. As for me, I want nothing to do with it.’
Probes: DOG, CAT

The results of the study by Emmorey and Lillo-Martin (Experiment 2) can be interpreted as showing that overt and null pronouns similarly activate their antecedents. Reaction times to the antecedents in both the overt and null pronoun conditions were faster than reaction times to the same probe signs in the control, no-anaphora condition. This would indicate that when an overt pronoun or a null argument is presented, signers activate its antecedent. If null arguments of agreeing verbs are pronominal, as in the analysis offered by Lillo-Martin (1986), this parallel performance of overt and null arguments would be expected.

However, there is a way in which the results of the studies by both Emmorey et al. (1991) and Emmorey & Lillo-Martin (1995) differ from those typically observed with English. In a number of studies of the processing of English, non-referent suppression is found alongside referent activation. That is, not only is the processing time for the antecedent faster, but accessing the non-antecedent is slower in the context of a pronoun picking out a different referent compared to the no-pronoun condition. In contrast, this inhibition effect was not found for the non-referent of overt pronouns by either Emmorey et al. (1991) or Emmorey & Lillo-Martin (1995) (note that the null-argument condition does not bear on this issue, since the probes consisted of the referents of the null subject and the null object of the test clause, so there is no non-referent probe in these items).

To address this issue, Emmorey (1997) conducted another experiment looking at referent activation using a different type of comparison condition, one in which probes were presented either before or after an anaphoric element. For non-antecedent probes, response time was slower when the probe was presented after the anaphoric element than when the probe was presented before the anaphoric element; that is, the non-antecedents were inhibited after a pronoun was presented that should induce the participant to recall the actual antecedent. However, although non-antecedent suppression was found, the different baseline used in the study by Emmorey (1997) resulted in no evidence of antecedent activation, unlike the previously reviewed studies. Emmorey suggests that this is due to the normal decline in activation that occurs when a noun phrase is encountered; in the before-pronoun condition, this decline has not depressed the activation level much, so even if the pronoun reactivates the antecedent, it only goes back to the same level of activation it had earlier in the sentence.

There is one more important theoretical issue addressed by these studies to be discussed. In Emmorey et al. (1991) a second experiment was conducted in which the spatial locus used by the probe either matched or did not match its original position within the introducing sentence.


Participants were told to make their judgment on the basis of the lexical content, not the spatial locus, and in fact they seemed to be able to do that, as there was no interference effect for the mis-matching spatial location. Interestingly, another study, by Emmorey et al. (1995), did find interference for mis-matching spatial locations, but only for cases in which the spatial loci indicated topographical spatial information, not when they were purely referential. Sign languages frequently make use of the space in front of a signer to indicate physical spatial relationships between elements, which can be considered a topographical use of space (see also Perniss, Chapter 17, on use of space). What on the surface appears to be very similar pointing to or indicating spatial locations thus has different functions and arguably different linguistic analyses. On the basis of the set of findings discussed so far, Emmorey (1997, 2002) concludes that the relationship between referential pronouns and their spatial loci is semantically light and is not kept in memory, while that between locative pronouns and locations is kept in memory. This is consistent with theoretical positions that differentiate these two types of elements (see also Hosemann, Chapter 6, on verb agreement).

Besides the probe-recognition technique, another methodology has been used to study null and overt pronouns experimentally: tracking eye gaze during language production (also mentioned in Cecchetto, Chapter 13). Thompson et al. (2006) used a head-mounted eye-tracking system to investigate signers’ use of eye gaze while they produced sentences containing verbs with overt or null arguments. This study tested the proposal by Neidle et al. (2000) that eye gaze marks object agreement and licenses null objects whether or not the manual verb shows agreement, in contrast to the proposal by Lillo-Martin (1986) that null objects of verbs without manual agreement require a topic for licensing. What Thompson et al. found was that eye gaze to the object locus was used with verbs marked with manual agreement, but rarely did eye gaze indicate the object locus with plain verbs; rather, gaze was directed at the addressee or elsewhere during the production of plain verbs. Their data furthermore contained no examples of null objects with plain verbs without an overt topic. Further discussion of the study by Thompson et al. can be found in Hosemann, Chapter 6, where its implications for theories of verb agreement are discussed.

The experimental studies of null arguments discussed in this section contribute toward theoretical questions in the following ways. First, they indicate that null arguments used with agreeing verbs are processed similarly to overt pronouns, at least by the 1000 ms time point at which the differential activation of probes was found. However, this cannot be taken as definitive evidence that their linguistic status should be pronominal, as opposed to other possible analyses (e.g., the argument ellipsis analysis reviewed in Cecchetto, Chapter 13), for several reasons. Importantly, various types of psycholinguistic experiments have indicated that antecedent reactivation is found for null elements of different types, but to differing degrees based on the method employed (see Fodor 1993). On the basis of these findings, Fodor (1989, 1993) proposed that the probe-recognition task taps a semantic level of representation rather than a syntactic one. If the study by Emmorey & Lillo-Martin (1995) similarly tapped a semantic level, it can be concluded that the signers did activate the antecedent of the null argument, but not that the null argument has a specifically pronominal representation in the syntax. Another relevant comparison comes from studies of null-argument processing in languages like Japanese and Korean, where varying linguistic analyses have been proposed. Such studies have employed different methodologies from the one used in Emmorey’s studies, so conclusions must be tentative. While some differences between processing of overt and null arguments have been found, it seems that in both cases referent activation can be detected (e.g., Kwon & Sturt 2013).


Therefore, caution is advised in drawing conclusions about the syntactic nature of null arguments in ASL from the finding that they trigger referent activation.

The experimental findings reviewed here also bear on questions regarding the nature of the spatial loci that overt pronouns and agreeing verbs are associated with. As summarized above, Emmorey et al. (1995) found interference in referent activation for mis-matching loci only for verbs indicating spatial information, not for purely referential ones. In an interesting follow-up, Emmorey & Falgier (2004) compared activation for referent and location information in sentences where the same locus is used for both. They found significant activation for the referent only, not for the location associated with the referent. They concluded that pronouns only activate their antecedent noun phrases, and in that way, they are processed similarly to spoken language pronouns. Thus, despite the significant differences between signed and spoken pronouns due to the use of loci in signing space, the relevant aspect from the processor’s point of view is simply the indication of a referent (see Cormier et al. (2013) for discussion about the similarities and differences between signed and spoken pronouns).

14.3 Acquisition studies

The acquisition of null arguments in sign languages has been investigated in various ways, including studies of first-language learners, child bilinguals, and adult second-language learners. The studies have addressed several different research questions, which have focused more on acquisition issues than on the question of how null arguments are to be analyzed. These issues will be discussed in the context of each set of studies.

14.3.1 Acquisition of null arguments – syntactic factors (Deaf native signers)

It is well-known that children acquiring a wide variety of languages tend to omit arguments early in their development. For example, monolingual English-acquiring children produce utterances such as those in (4) (from Hyams 1986, citing Bloom et al. 1975).

(4) a. Play it
b. Eating cereal
c. See window
d. No go in

A good deal of research has debated whether such utterances reflect a lack of the appropriate grammatical knowledge regarding the limitations on the use of null arguments in English (see Hyams (2011) and Valian (2016) for reviews). It has become clear, however, that the types of missing arguments in early child language do reflect, to some extent, the grammatical options of the adult language; for example, while English-speaking children omit subjects, they do so less frequently than Italian-speaking children (Valian 1991), and unlike Chinese-speaking children, they rarely omit objects (Wang et al. 1992).

In this context, the study of the acquisition of null arguments in ASL by Lillo-Martin (1991) can be seen as another indication that children’s early null arguments reflect their broader knowledge of their target grammar, despite some numerical fluctuations. Lillo-Martin studied the use of null arguments in elicited narratives produced by deaf native signing children between the ages of 1;07 (years;months) and 8;11.


She noted that there was a parallel development between the use of null arguments and the use of verb agreement, as follows. In the earliest stage, one child in her study (age 1;07) produced only very simple one- or two-sign utterances that would frequently omit arguments, such as those in (5). In these examples, the referent of the missing arguments could be understood from the broader linguistic context, but not from language-internal mechanisms (such as verbal morphology or an overt topic).

(5)

(Monica, 1;07)
a. BOY
b. BALLOON
c. CRY

At the next stage (starting with 2-​year-​olds), children produced sentences with overt or null arguments. Those with null arguments were similar to null-​argument sentences observed in English-​speaking children, in that they were not grammatically licensed; no verb agreement was observed. Examples are given in (6). (6)

(Steve, 2;03)
a. aPRONOUN HAVE BALLOON.
‘He has balloons.’
b. GIVE [uninflected] BALLOON.
‘(He) gave (him) a balloon.’
c. HOLD, LET-GO.
‘(He) held on (to it), then let go (of it).’

Near their third birthday, children started using verb agreement, though inconsistently. Sometimes the loci used with agreeing verbs were inconsistent; at other times the child signed with reference to locations in the book pictures, signing on the book itself. Null arguments were sometimes licensed, but their referent was not always made grammatically clear. See (7) for examples. (7)

(Steve, 2;11)
a. SEEa WITH BALLOON.
‘(He) sees (-him) with balloons.’
b. GIVE (2h)b BOY BALLOON [index to picture]
‘(He) gives the boy a balloon (he does).’

The next stage might be surprising. Around age 3;06, some children produced very few null arguments, instead repeating the overt arguments unnecessarily. They seemed to over-​use overt nouns or pronouns, in contexts where native signing adults would prefer null arguments. Examples are given in (8). (8)

(Maureen, 3;08)
a. BOY WALK.
‘A boy is walking.’
b. BOY SEE [uninflected] BALLOON. BOY WANT BALLOON.
‘The boy saw a balloon. The boy wanted a balloon.’


c. MAN GIVE [uninflected] A BOY BALLOON.
‘The man gives the boy a balloon.’
d. BOY LET-GO BALLOON GO-AWAY.
‘The boy lets go and the balloon goes away.’

During this stage, children’s productions also generally lacked verbal morphology. Verbs were produced in ‘citation form’, without setting up referents in signing space. In other cases, even when verb agreement was used, arguments were still overtly repeated. Finally, at ages 5–6, verbal morphology was used correctly and null arguments re-emerged, properly licensed with agreeing verbs, as illustrated in (9).

(9)

(Susan, 6;02)
aPRONOUN BOY WANT PAINT. aPAINTb GIRL. THEN GIRL bPAINTa. THEN BOY aPOURb SPILL-ON-HEAD. GIRL bPOURa. THEN MOTHER cSCOLDa,b.

‘The boy wants to paint. (He-) paints the girl. Then the girl paints (-him). Then the boy pours on (-her), spilling on the head. The girl pours on (-him). The mother scolds (-them both).’

Lillo-Martin (1991) interpreted this pattern as evidence that, as the children were developing in their ability to use verbal morphology, they were able to apply that ability to properly license null arguments. The stage at 3;06 was taken to indicate not a grammar that disallows null arguments, but rather one that recognizes the licensing requirement on them. Recognition of this requirement, and conservativity in producing unlicensed null arguments, was taken to indicate linguistic competence in advance of performance ability.

Lillo-Martin (1991) also considered the possibility that children’s early use of null arguments might be related to their growing competence with using (null) topic licensing. She observed that overt topics were rarely used in the first few stages, and in fact the null arguments were often produced without any overt nominal indicating the referent. Only in the last stage did children correctly use discourse-topic licensing of null arguments. Note that although Lillo-Martin’s focus was on the syntactic licensing and identification of null arguments, since the data she examined came from mini-narratives, the discourse requirements for reference tracking (discussed in Section 14.1) would also be relevant. We discuss signing children’s development in this domain, as well as early syntactic factors of null arguments in bimodal bilingual acquisition of ASL and English, in the next subsection.

14.3.2 Null and overt arguments in reference tracking (Deaf and hearing native signers)

Null arguments play multiple roles in languages, with specifics varying depending on licensing requirements. Since they typically are interpreted with respect to a particular linguistic context, researchers have frequently considered the use of null versus overt arguments in narratives, where reference tracking functions can be clearly seen. This is also an area for investigation of language development, since very young children often do not manage well with the requirements of cross-sentence cohesion and successive elaboration of the contents of a narrative.


Morgan (2006) investigated the use of overt and null arguments for reference tracking in the narratives of 12 native or near-native 4- to 13-year-old children and 2 adults signing British Sign Language (BSL). He considered reference forms according to their function in the discourse: introduction, for the first mention of a character in a story; maintenance, for continued reference to a character in discourse focus; and re-introduction, for a referent that has gone out of focus and returned. Adult BSL signers differentiated these functions in the use of overt versus covert arguments. Introduction was virtually always accomplished by an overt noun phrase; maintenance almost never used an overt noun phrase; re-introduction used an overt noun phrase half of the time.

Children also generally differentiated the functions, but they did not reach the extreme levels of the adults. The youngest group (ages 4–6 years) used overt forms for introduction about 70% of the time – much less than the adults. They used overt forms for maintenance 22% of the time – much more than the adults. Finally, their use of overt forms for re-introduction was only somewhat higher than adults’ (about 60%), making re-introduction not very different from introduction for the children. Older age groups showed increasing use of overt noun phrases for introduction, approaching the adult level, and lower levels of overt noun phrases for maintenance; however, rather than approach the adult level, they used higher percentages of overt forms for re-introduction. Morgan (2006) interprets these findings as indicating that children are more concerned with making characters explicit at the sentence level, and less able to control reference forms across a discourse. Morgan compares these results to the characterization of stages of the organization of reference forms in (hearing) English-speaking children by Bamberg (1986).

An extension of this research to native signers acquiring ASL, including both Deaf children and bimodal bilinguals, was conducted by Reynolds (2016). Reynolds (2016) compared the use of null and overt forms in the narrative productions of six Deaf children (ages 5;05–7;10) of Deaf parents, in comparison to six bimodal bilingual children (ages 5;02–8;02) acquiring both ASL and English, each of whom was observed two times, 18 months apart. Three of the bimodal bilingual children are hearing children of Deaf parents (Codas), and the other three are Deaf children of Deaf parents, using cochlear implants to access spoken language (DDCI). The first question to address is whether the Deaf children show expected patterns. Next, we discuss the patterns observed in the bimodal bilingual children, and what accounts for them.

The Deaf children in Reynolds’ study produced overt forms for reference introduction over 90% of the time; they produced overt forms for maintenance 15% of the time; and they produced overt forms for re-introduction 50% of the time. This pattern is very similar to the pattern observed for adults in the study by Morgan (2006), and also the adult pattern found for DGS by Perniss & Özyürek (2015) described in Section 14.1, though dissimilar to the pattern found for ASL-signing adults by Frederiksen & Mayberry (2016), and unlike the pattern for any of the age groups of children in Morgan’s study (even though the oldest group in the latter was several years older than the oldest participants in Reynolds’ study). Overall, the Deaf participants showed good differentiation of noun phrase form associated with referential function.

It was predicted that the bimodal bilingual children might perform differently from the Deaf participants. Reynolds (2016) presents bimodal bilingual children as heritage learners, in the sense that their home (heritage) language is not the same as the dominant community language (see also Chen Pichler et al. 2017).


Heritage speakers frequently show differences from monolinguals in particular domains, including over-use of overt forms in contexts where null forms would be used (Montrul 2004). Differences might also be expected due to increased influence of English at school and in other contexts, since it requires overt forms in more contexts than does ASL. In addition, it was expected that the degree of departure from ASL-like structures toward more English-like ones would increase over the two observation points 18 months apart.

As predicted, the bimodal bilingual children used a greater number of overt forms than the Deaf children did. They used almost twice as many overt forms in re-introduction contexts, and over twice as many overt forms in the maintenance contexts. Furthermore, the number of overt forms used increased from the first to the second observation of the bimodal bilingual children. Looking at the types of overt forms used, the bimodal bilinguals produced overt pronouns much more frequently than did the Deaf participants, whose overt forms were almost always nominal. Reynolds suggests that this may be due to influence from English, since English would almost always use overt pronouns in the contexts where ASL prefers null ones. This suggestion is supported by her observation that overt forms were used more frequently at the second observation than the first, after the bimodal bilinguals had had a sustained period of intense monolingual English exposure at school. Reynolds discusses this effect within the context of considering young bimodal bilinguals as heritage learners.

Further support for the proposal that bilingualism may affect the proportion of null vs. overt forms used by bimodal bilinguals comes from a study by Koulidobrova (2017b). Koulidobrova’s main focus was on children’s English, but she also looked at the use of ASL null arguments in spontaneous production of two younger children, one of whom was also observed by Reynolds at a later age. Koulidobrova observed that the two children overall used about 40% null subjects, lower than the overall rate of null-subject usage by the older Deaf children in Reynolds’ study. While unfortunately a quantitative comparison using similar spontaneous production data from younger Deaf signing children is not available, this rate of null subjects is also lower than that typically observed for children acquiring a null-subject spoken language.

Summing up, studies of the use of null arguments by signing children have looked at two major issues. First is the question of when native signers show evidence of understanding the licensing requirements on the occurrence of null arguments. The study by Lillo-Martin (1991) argued that at the earliest stages, null arguments are not used according to the grammatical requirements of ASL; rather, they are similar to null arguments used by English-speaking children and others. However, she argued that at the next stage, around age 3;06, children’s non-adult null-argument usage could be explained by their developing verbal morphology; their use of overt arguments when verb agreement is unreliable is indicative of their knowledge of the licensing conditions, and as they increase in accuracy of verbal morphology, they use appropriately sanctioned null arguments.

The second issue concerns children’s developing ability to correctly select null vs. overt arguments as required by the discourse context, focusing on reference tracking in narratives. Morgan (2006) found that over time, children acquiring BSL became closer to adult-like in their distribution of overt forms in introduction, maintenance, and re-introduction contexts, although even the 11–13-year-olds used overt forms in re-introduction much more than adults did. Reynolds (2016) found that 5- to 7-year-old native Deaf signers patterned very much like the adults in Morgan’s study, though the ASL-signing adults in the study by Frederiksen & Mayberry (2016) showed a surprisingly higher proportion of overt forms in the re-introduction contexts. Reynolds also showed


some bilingualism effects in bimodal bilinguals, who were more likely to use overt forms in maintenance and especially in re-​introduction, displaying a pattern that appears to combine aspects of ASL and English. If native signing bilingual children show bilingualism effects including higher usage of overt forms, how do adult second-​language learners manage with null arguments when they learn a sign language? This is the question addressed by the two studies described in the following section.

14.3.3 Adult L2 learners

There are few studies of adult learners of a sign language as a second language (L2) in any grammatical domain. When the learners’ first language is a spoken language, the new language is also in a new modality (M2), potentially leading to acquisition patterns not seen in speakers learning a second spoken language (Chen Pichler & Koulidobrova 2016). With this in mind, several studies have examined the use of null arguments in adult M2L2 learners of various sign languages.2

Bel et al. (2015) looked at M2L2 learners of Catalan Sign Language (LSC) whose first languages were Catalan and Spanish (hence technically these were L3 learners). These were advanced learners in their final year of interpreter training. L1 Deaf signers of LSC were also tested for comparison. Participants viewed a silent film and were asked to tell a story of a similar event in LSC. As might be expected, Bel et al. found that for the native signers, overt elements (nominals and pronouns) were the preferred types used in referent introduction, and null pronouns were predominantly used for maintenance. All three types were used in re-introduction, with about two-thirds overt and one-third covert. The L2 signers followed the same pattern as the L1 signers for introduction, but they used more overt pronouns in maintenance and re-introduction contexts. This pattern is like that found for L2 learners of spoken languages. Note that the study by Bel et al. demonstrates for a sign language that over-use of overt forms by learners is not restricted to cases in which the L1 is a non-null subject language. Catalan and Spanish, like LSC, permit null subjects. Since over-use of overt forms is also found in Spanish-Italian bilinguals (Sorace & Serratrice 2009), it seems to be more a bilingualism phenomenon than one strictly connected to the properties of the L1.

Frederiksen & Mayberry (2015) studied M2L2 learners of ASL in the same story re-telling task used in their (2016) study of native signers. The participants in this study were beginning- to low-intermediate-level learners whose L1 was English. They performed similarly to native signers in strongly preferring overt nouns for introduction and zero anaphora for maintenance contexts (with the L2 participants using fewer classifiers than the L1 participants did). The L2 signers used fewer zero anaphors in the re-introduction context, with greater use of classifiers here. This pattern is unlike that found by Bel et al. in that the learners did not display a greater tendency for overtness in comparison to the native signers. However, Frederiksen & Mayberry point out that the L2 signers used re-introduction more frequently, and maintenance less frequently, than the native signers did. This was due in part to a technique used by the native signers by which a referent is held across a clause boundary (and therefore can be referred to using maintenance). They suggest that both over-use of overt forms and low use of cross-clause holds by L2ers are indications of a focus on sentence-level planning by L2 learners in comparison to native signers or speakers.


This brief discussion of M2L2 learners’ use of null arguments in sign languages indicates that there are similarities between sign languages and spoken languages in the over-​use of overt forms (also observed in bimodal bilingual children). The modality-​ specific technique of holding a referent observed by Frederiksen & Mayberry (2015) is an indication that common underlying constraints may surface somewhat differentially across modalities, motivating further investigation of this understudied population.

14.4 Discussion and conclusion

Experimental studies on sign language null arguments have increased our understanding of language processing and language acquisition. Overt and null arguments in sign languages involve mechanisms that on the surface appear quite different from those used in spoken languages: signers must mentally associate discourse referents with locations in signing space, communicate these associations, and maintain them through a discourse sequence (see Lillo-Martin & Klima (1990) and Steinbach & Onea (2016) for some suggestions about formalizing this process). Despite these differences, the online processing of overt and null pronouns in ASL bears strong resemblances to what has been observed for spoken languages. The similarities and differences in processing studies begin to suggest answers to deeper questions about the mental activities revealed by these studies, which can be taken up again in the context of other studies of languages with null arguments (e.g., Sakamoto & Walenski 1998).

Acquisition studies have contributed in several ways. The observation that syntactic structure acquisition is co-dependent with the acquisition of morphology bears on theories of acquisition or architecture that might attempt to separate the two. At the discourse level, similarities in the distribution of overt and null elements speak to theories of referent access that start at a stage that is independent of language-specific properties. Bilingual studies (both 2L1 and L2) again reveal the potential for modality-specific properties to show up on the surface, while common language characteristics and the mental systems that permit language share deep properties.

Notes
1 Some authors have discussed ways in which pointing signs in sign languages may be gestural and/or have gestural components; see Meier & Lillo-Martin (2010, 2013) and Cormier et al. (2013) for some discussion. One part of this discussion concerns whether there is any formal 2nd person/3rd person distinction in ASL and other sign languages, although these interpretations may be assigned. In addition, Koulidobrova & Lillo-Martin (2016) argue that (non-first) points have the status of demonstratives rather than personal pronouns. As these issues are not addressed by the experimental studies reviewed here, they will be put aside for the current purposes.
2 See also Matsuoka & Lillo-Martin (2017), who look at the interpretation of null and overt pronouns in M2L2 learners of Japanese Sign Language. The focus of their study is on the overt pronouns, with the null arguments as a comparison case, so it is not discussed in this chapter.

References
Bahan, Benjamin, Judy Kegl, Robert G. Lee, Dawn MacLaughlin, & Carol Neidle. 2000. The licensing of null arguments in American Sign Language. Linguistic Inquiry 31. 1–27.
Bamberg, Michael. 1986. A functional approach to the acquisition of anaphoric relationships. Linguistics 24. 227–284.


Diane Lillo-Martin Bel, Aurora, Marta Ortells, & Gary Morgan. 2015. Reference control in the narratives of adult sign language learners. International Journal of Bilingualism 19(5). 608–​624. Benedicto, Elena & Diane Brentari. 2004. Where did all the arguments go? Argument-​changing properties of classifiers in ASL. Natural Language and Linguistic Theory 22. 743–​810. Bloom, Lois, Patsy Lightbown, & Lois Hood. 1975. Structure and variation in child language. Monographs of the Society for Research in Child Development 40(2). 197. Chen Pichler, Deborah & Elena Koulidobrova. 2016. Acquisition of sign language as a second language (L2). In Marc Marschark & Patricia E. Spencer (eds), The Oxford handbook of Deaf studies in language: Research, policy, and practice, 218–​230. Oxford: Oxford University Press. Chen Pichler, Deborah, Wanette Reynolds, Jeffrey Levi Palmer, Ronice Müller de Quadros, Laura Viola Kozak, & Diane Lillo-​Martin. 2017. Heritage signers: Bimodal bilingual children from deaf families. In Jiyoung Choi, Hamida Demirdache, Oana Lungu, & Laurence Voeltzel (eds.), Language acquisition at the interfaces: Proceedings of GALA 2015, 247–​269. Newcastle upon Tyne: Cambridge Scholars Publishing. Cormier, Kearsy. 2012. Pronouns. In Roland Pfau, Markus Steinbach, & Bencie Woll (eds.), Sign language: An international handbook, 227–​244. Berlin: De Gruyter Mouton. Cormier, Kearsy, Adam Schembri, & Bencie Woll. 2013. Pronouns and pointing in sign languages. Lingua 137. 230–​247. Emmorey, Karen. 1997. Non-​antecedent suppression in American Sign Language. Language and Cognitive Processes 12(1). 103–​112. Emmorey, Karen. 2002. Language, cognition, and the brain: Insights from sign language research. Mahwah, NJ: Lawrence Erlbaum. Emmorey, Karen & Brenda Falgier. 2004. Conceptual locations and pronominal reference in American Sign Language. Journal of Psycholinguistic Research 33(4). 321–​331. Emmorey, Karen & Diane Lillo-​Martin. 1995. Processing spatial anaphora: Referent reactivation with overt and null pronouns in American Sign Language. Language and Cognitive Processes 10(6). 631–​664. Emmorey, Karen, David Corina, & Ursula Bellugi. 1995. Differential processing of topographic and referential functions of space. In Karen Emmorey & Judy Reilly (eds.), Language, gesture, and space, 43–​62. Hillsdale, NJ: Lawrence Erlbaum. Emmorey, Karen, Freda Norman, & Lucinda O’Grady. 1991. The activation of spatial antecedents from overt pronouns in American Sign Language. Language and Cognitive Processes 6. 207–​228. Fodor, Janet Dean. 1989. Empty categories in sentence processing. Language and Cognitive Processes 4. 155–​209. Fodor, Janet Dean. 1993. Processing empty categories: A question of visibility. In Gerry Altmann & Richard Shillcock (eds.), Cognitive models of speech processing: the second Sperlonga meeting, 351–​400. Hillsdale, NJ: Lawrence Erlbaum. Frederiksen, Anne T. & Rachel Mayberry. 2015. Tracking reference in space: How L2 learners use ASL referring expressions. In Proceedings of the 39th Annual Boston University Conference on Language Development, 165–​177. Somerville, MA: Cascadilla Press. Frederiksen, Anne T. & Rachel Mayberry. 2016. Who’s on first? Investigating the referential hierarchy in simple native ASL narratives. Lingua 180. 49–​68. Hickmann, Maya & Henriëtte Hendriks. 1999. Cohesion and anaphora in children’s narratives: A comparison of English, French, German, and Mandarin Chinese. Journal of Child Language 26. 419–​452. Hyams, Nina. 1986. 
Language acquisition and the theory of parameters. Dordrecht: Reidel. Hyams, Nina. 2011. Missing subjects in early child language. In Jill de Villiers & Thomas Roeper (eds.), Handbook of generative approaches to language acquisition, 13–52. New York: Springer. Koulidobrova, Elena. 2012. When the quiet surfaces: ‘Transfer’ of argument omission in speech of ASL-English bilinguals. Storrs, CT: University of Connecticut PhD dissertation. Koulidobrova, Elena. 2017a. Elide me bare: Null arguments in American Sign Language. Natural Language and Linguistic Theory 35(2). 397–446. Koulidobrova, Elena. 2017b. Language interaction effects in bimodal bilingualism: Argument omission in the languages of hearing ASL-English bilinguals. Linguistic Approaches to Bilingualism 7(5). 583–613.



Null arguments: experimental perspectives Koulidobrova, Elena & Diane Lillo-​Martin. 2016. A ‘point’ of inquiry: The case of the (non-​)pronominal I X in ASL. In Patrick Georg Grosz & Pritty Patel-​Grosz (eds.), The impact of pronominal form on interpretation, 221–​250. Berlin: Mouton de Gruyter. Kwon, Nayoung & Patrick Sturt. 2013. Null pronominal (pro) resolution in Korean, a discourse-​ oriented language. Language and Cognitive Processes 28. 377–​387. Lillo-​Martin, Diane. 1986. Two kinds of null arguments in American Sign Language. Natural Language and Linguistic Theory 4. 415–​444. Lillo-​Martin, Diane. 1991. Universal Grammar and American Sign Language: Setting the null argument parameters. Dordrecht: Kluwer Academic Publishers. Lillo-​Martin, Diane & Edward S. Klima. 1990. Pointing out differences: ASL pronouns in syntactic theory. In Susan D. Fischer & Patricia Siple (eds.), Theoretical issues in sign language research, Volume 1: Linguistics, 191–​210. Chicago: University of Chicago Press. Lillo-​Martin, Diane & Richard P. Meier. 2011. On the linguistic status of ‘agreement’ in sign languages. Theoretical Linguistics 37. 95–​141. MacDonald, Maryellen C. 1986. Priming during sentence processing:  Facilitation of responses to a noun from a coreferential pronoun. Los Angeles, CA:  University of California PhD dissertation. Mathur, Gaurav & Christian Rathmann. 2012. Verb agreement. In Roland Pfau, Markus Steinbach, & Bencie Woll (eds.), Sign language: An international handbook, 136–​157. Berlin: De Gruyter Mouton. Matsuoka, Kazumi & Diane Lillo-​Martin. 2017. Interpretation of bound pronouns by learners of Japanese Sign Language. In Mineharu Nakayama, Yi-​ching Su, & Aijun Huang (eds.), Studies in Chinese and Japanese language acquisition:  In honor of Stephen Crain, 107–​126. Amsterdam: John Benjamins. Meier, Richard P. & Diane Lillo-​Martin. 2010. Does spatial make it special? On the grammar of pointing signs in American Sign Language. In Donna B. Gerdts, John C. Moore, & Maria Polinsky (eds.), Hypothesis A/​hypothesis B:  Linguistic explorations in honor of David M. Perlmutter, 345–​360. Cambridge, MA: MIT Press. Meier, Richard P. & Diane Lillo-​Martin. 2013. The points of language. Humana.mente: Journal of Philosophical Studies 24. 151–​176. Meir, Irit. 2002. A cross-​modality perspective on verb agreement. Natural Language and Linguistic Theory 20. 413–​450. Montrul, Silvina. 2004. Subject and object expression in Spanish heritage speakers:  A case of morpho-​syntactic convergence. Bilingualism: Language and Cognition 7. 1–​18. Morgan, Gary. 2006. The development of narrative skills in British Sign Language. In Brenda Schick, Marc Marschark, & Patricia Spencer (eds.), Advances in sign language development in deaf children, 314–​343. Oxford: Oxford University Press. Neidle, Carol, Judy Kegl, Dawn MacLaughlin, Benjamin Bahan, & Robert G. Lee. 2000. The syntax of American Sign Language: Functional categories and hierarchical structure. Cambridge, MA: MIT Press. Nicol, Janet L. & David A. Swinney. 2002. The psycholinguistics of anaphora. In Andrew Barss (ed.), Anaphora: A reference guide, 72–​104. Hoboken, NJ: Wiley-​Blackwell. Padden, Carol A. 1983. Interaction of morphology and syntax in American Sign Language. San Diego, CA: University of California PhD dissertation. Perniss, Pamela & Asli Özyürek. 2015. Visible cohesion:  A comparison of reference tracking in sign, speech, and co-​speech gesture. Topics in Cognitive Science 7(1). 36–​60. Reynolds, Wanette. 2016. 
Early bimodal bilingual development of ASL narrative referent cohesion: Using a heritage language framework. Washington, DC: Gallaudet University PhD dissertation. Sakamoto, Tsutomu & Matthew Walenski. 1998. The processing of empty subjects in English and Japanese. In Dieter Hillert (ed.), Sentence processing: A cross-linguistic perspective (Syntax and Semantics, Vol. 31), 95–111. Leiden: Brill. Sorace, Antonella & Ludovica Serratrice. 2009. Internal and external interfaces in bilingual language development: Beyond structural overlap. International Journal of Bilingualism 13. 195–210. Steinbach, Markus & Edgar Onea. 2016. A DRT analysis of discourse referents and anaphora resolution in sign language. Journal of Semantics 33(3). 409–448.



Thompson, Robin, Karen Emmorey, & Robert Kluender. 2006. The relationship between eye gaze and verb agreement in American Sign Language: An eye-tracking study. Natural Language and Linguistic Theory 24(2). 571–604. Valian, Virginia. 1991. Syntactic subjects in the early speech of American and Italian children. Cognition 40. 21–81. Valian, Virginia. 2016. Null subjects. In Jeffrey Lidz, William B. Snyder, & Joe Pater (eds.), The Oxford handbook of developmental linguistics, 386–413. Oxford: Oxford University Press. Wang, Qi, Diane Lillo-Martin, Catherine T. Best, & Andrea Levitt. 1992. Null subject versus null object: Some evidence from the acquisition of Chinese and English. Language Acquisition 2. 221–254. Zwitserlood, Inge. 2012. Classifiers. In Roland Pfau, Markus Steinbach, & Bencie Woll (eds.), Sign language: An international handbook, 158–186. Berlin: De Gruyter Mouton.



15
RELATIVE CLAUSES
Theoretical perspectives

Chiara Branchini

15.1 Introduction: the cross-linguistic investigation of relative constructions

Subordination is a property that characterizes all natural languages. The crucial relevance of its assessment explains the vast cross-linguistic literature in this field since the earliest studies in Generative Grammar. Assessing the existence of subordination has been even more important for sign languages, as a crucial step towards the recognition of their status as fully-fledged natural languages (cf. Liddell 1978, 1980; Pfau & Steinbach 2016).

This chapter focuses on a specific domain within the area of subordination, namely on relative constructions. Cross-linguistically, these are complex sentences sharing some crucial properties: (i) they are composed of a main and a dependent clause; (ii) they contain a constituent, the head (mainly of category NP), semantically and syntactically shared by both clauses but fulfilling independent syntactic roles in each clause; (iii) each clause includes a determiner phrase (DP) position syntactically and semantically bound to the head and potentially able to host it. Yet, beyond these common features, variation in semantic and syntactic properties intersects, thus yielding the different relativization strategies attested cross-linguistically and potentially co-existing within a given language (Keenan 1985).

The typological variation observed has been classified on the basis of two criteria. On the one hand, there is the syntactic criterion concerning (i) the hierarchical relation between the relative clause (RC) and the main clause, (ii) the phonological realization of the head in each clause, and (iii) the syntactic category of the RC. The semantic criterion, on the other hand, refers to the semantic intersection (or lack thereof) between the material composing the RC and the head. While a syntactically based classification distinguishes between internally-headed RCs (IHRCs), externally-headed RCs (EHRCs), free RCs, and correlatives, the semantic criterion yields a three-way semantic typology of restrictive RCs, non-restrictive RCs, and maximalizing RCs. Other properties contributing to the attested cross-linguistic diversity are the presence of relative markers (relative pronouns, complementizers, personal pronouns, resumptive pronouns) and the position of the RC within the sentence.

Mirroring the great interest of spoken language linguists in investigating this domain of grammar, the literature on RCs in sign languages is rapidly growing. The first attempt,



pursued by Thompson (1977), unsuccessfully looked for manual markers of relativization (such as relative pronouns and complementizers) as evidence for the presence of RCs in American Sign Language (ASL), thereby falling prey to the misconception that sign languages should exhibit the same morpho-syntactic devices as those employed by spoken languages. The absence of manual markers of relativization led Thompson to conclude that ASL lacked subordination. As the subsequent literature has shown, Thompson’s conclusion was misguided, as he overlooked modality-specific features that play a crucial role in marking RCs in sign languages (Liddell 1978, 1980). In line with a more general tendency to avoid manual functional markers of subordination, the sign languages studied to date employ non-manual markers (NMMs) as obligatory syntactic correlates of relativization. For many sign languages, raised eyebrows, squinted eyes,1 and head tilt spreading over the RC have been found to represent the only means to distinguish a relative construction from a coordination of clauses.

In studies on spoken languages, various syntactic tests have been used as diagnostics to pin down the syntactic and semantic properties of RCs, thus allowing for their classification within traditional typologies of relativization. We shall see that the same tests can be applied to sign languages and that their results allow for the identification of the same typologies of relativization as proposed for spoken languages. Moreover, consistent with similar studies on spoken languages, recent research on the interface between prosody and syntax in sign languages offers new diagnostic tools for a fine-grained analysis of clause boundaries. Research on spoken language prosody shows that pauses and tone variation mark clause boundaries, and that clauses form intonational phrases (Emonds 1976; Nespor & Vogel 1986; Selkirk 2005). Similarly, pauses in the signing stream and NMMs such as eye blink, head nod, facial expressions, and eye gaze direction have been proposed to signal clause and constituent boundaries and to mark phonological and intonational phrases in sign languages (see, e.g., Wilbur 1994; Sandler 1999; Herrmann 2010; Sze 2008a; Tang et al. 2010). This knowledge can be fruitfully applied to the study of RCs to identify the syntactic material composing the RC and the main clause, to determine the elements forming a constituent with the head, as well as the position of the RC in the sentence.

Research on relativization in sign languages poses some theoretical challenges. In this chapter, I will attempt to:

(i) describe the modality-specific features employed by sign languages when encoding the abstract concept of relativization;
(ii) identify the core features of relativization displayed cross-linguistically regardless of modality;
(iii) find out how sign languages can contribute to a deeper understanding of the phenomenon and whether modality-specific features provide new evidence for the core syntactic and semantic properties displayed by RCs in the world’s languages.

Section 15.2 provides a description of the syntactic typologies of RCs: the first two sections address headed relatives: internally-headed (Section 15.2.1) and externally-headed (Section 15.2.2) RCs; the last two sections address other types, namely free relatives (Section 15.2.3) and correlatives (Section 15.2.4). The properties characterizing the semantic distinction between restrictive and non-restrictive RCs are addressed in




Section 15.3. Section 15.4 investigates the syntactic and semantic relationship holding between RCs and topicalized constituents in sign languages. Section 15.5 sums up the core concepts presented in the chapter and points out avenues for further research in this area. For both the syntactic and semantic typology of relativization, we first provide a description of their general properties based on studies on spoken languages before turning to patterns observed in sign languages whose relative constructions have been described to date. Some diagnostic tests deriving from the literature on spoken languages are then applied to the sign language data, and the theoretical proposals advanced in the literature on sign language RCs are presented.

15.2 Syntactic typologies of relativization

This section provides an overview of our current understanding of the syntactic typologies of relativization, gained from studies on typologically diverse spoken languages, and applies this knowledge to sign languages, based on the patterns reported in the available studies on the syntax of RCs in sign languages.

15.2.1 Internally-headed relative clauses

15.2.1.1 Properties of internally-headed relative clauses

This relativization strategy is characterized by the fact that the head of the RC (always in bold in the examples) is phonologically realized inside the RC (always enclosed within brackets in the examples), according to its syntactic role within it. In the Japanese example in (1), the head keeki ‘cake’ surfaces inside the RC in object position.

(1)

     Yoko-wa [[Taro-ga sara-no ue-ni keeki-o oita]-no]-o tabeta
     Yoko-TOP Taro-NOM plate-GEN on-LOC cake-ACC put NMLZ-ACC ate
     ‘Yoko ate a piece of cake which Taro put on a plate.’
                                                (Japanese, Shimoyama 1999: 147)

An IHRC has the distribution and syntactic category of a complex NP originating inside the main clause, in the position corresponding to the syntactic role that the head has in the main clause. In (1), the RC occupies the object position within the main clause. However, in some languages, the IHRC may be produced in a dislocated position.2 Williamson (1987), Watanabe (1992), and Bianchi (1999) claim that the head of IHRCs cannot be selected by a definite determiner inside the RC and, although the issue is controversial, Bianchi (1999) and Alexiadou et  al. (2000) argue convincingly that even in languages where a definite determiner selects the head, the latter lacks definiteness. The head can co-​occur with a relative pronoun or be bare, as in (1). A definite strong determiner can surface at the right edge of the RC, be it in the form of a free morpheme, as the in the Tibetan example in (2), or a clitic, as -​no in (1). (2)

     [[Peemε thep khii-pa] the] nee yin.
     Peem.ERG book.ABS carry-PART the.ABS I.GEN be
     ‘The book that Peem carried is mine.’
                                                (Tibetan, Keenan 1985: 161)




One final feature worth mentioning is that IHRCs allow for internal recursion, that is, the possibility for the RC to contain a head modified by another RC, a syntactic operation known as stacking, illustrated by the Mohave example in (3). (3)

     [[tunay pi:pa ?-u:yu:-ny] hatcoq kyo:-ny]-c pos ka?a:k-k.
     yesterday man I-see-DEM dog bite-DEM-SUBJ cat kick
     ‘The man I saw yesterday, that the dog bit, kicked the cat.’
                                                (Mohave, Munro 1976)

IHRCs are also attested in sign languages.3 Liddell (1978, 1980) analyzes the ASL (SVO) sentence in (4a) as a relative construction including an IHRC minimally different from a coordination of clauses (4b) due to the presence of specific NMMs (glossed as ‘rel’) –​ backward head tilt, raised eyebrows, squinted eyes, and tensed upper lip –​that spread over the entire RC.4                  

(4)  a.            rel
        [RECENTLY DOG CHASE+ CAT] COME HOME
        ‘The dog which recently chased the cat came home.’
                                                (ASL, Liddell 1978: 66)
     b. RECENTLY DOG CHASE+ CAT COME HOME
        ‘The dog recently chased the cat and came home.’
                                                (ASL, Liddell 1978: 71)

IHRCs have also been detected in Italian Sign Language (LIS, Branchini & Donati 2009; Branchini 2014), Catalan Sign Language (LSC, Mosella 2011, 2012), and Turkish Sign Language (TİD, Kubuş 2016; Kubuş & Nuhbalaoğlu 2018), all of which are SOV languages, as well as in Hong Kong Sign Language (HKSL, Tang et al. 2010; Tang & Lau 2012), which is mostly SVO, but allows for SOV under some conditions (Sze 2003, 2008b). Common features of IHRCs in these sign languages are: (i) the head occupies a position inside the RC; (ii) the use of dedicated non-​manuals marking the RC (cross-​linguistically shared NMMs are raised eyebrows, squinted eyes, (forward or backward) head tilt); (iii) the (optional) presence of a determiner-​like element co-​referent with the head, marked by slightly different or intensified NMMs, that appears either next to the head or in clause-​final position within the RC; (iv) a tendency to produce the RC in sentence-​initial position, or, less frequently, in sentence-​final position. For some sign languages (e.g., LIS, HKSL, and ASL), a base position inside the main clause is also (marginally) acceptable. In the HKSL example in (5), the RC is in sentence-​initial position, the head FEMALE is internal to the RC, and it is marked by the determiner element IX a appearing next to the head and co-​referent with it. In (6) from LSC, the RC sits in sentence-​final position. The head BOOK is internal to the RC, and it is marked by the determiner-​like element MATEIX produced at the right periphery of the RC.                

(5)               rel
     [YESTERDAY IXa FEMALE CYCLE] IX1 LETTER SENDa
     ‘I sent a letter to that lady who cycled yesterday.’
                                                (HKSL, Tang & Lau 2012: 360)

(6)                          rel
     JOAN BRING NOT [BOOK YESTERDAY BUY (MATEIX)]
     ‘Joan hasn’t brought the book that he bought yesterday.’
                                                (LSC, Mosella 2012: 203)

As for feature (iii) above, it should be underlined that many studies on IHRCs in sign languages claim that the nominal status of the RC is achieved through the (optionally



spelled out) determiner-​like element co-​referent with the head appearing at the right periphery of the RC (see Branchini & Donati (2009) and Branchini (2014) for LIS; Mosella (2011) for LSC; Kubuş (2016) for TİD; Galloway (2012) for ASL).5 Galloway (2013) claims that in ASL IHRCs, the determiner may also precede the head co-​referent with it, but then it lacks the nominalizing function that only the clause-​final determiner has. The latter is generally produced with squinted eyes and raised eyebrows. When the determiner is not phonologically spelled out, or appears next to the head, spreading of the NMMs over the whole RC is obligatory, reaching their maximal intensity at the right periphery of the clause. By means of spreading, the NMMs alone are able to mark the RC, endowing it with the necessary nominal features. This distribution seems to suggest that, cross-​ linguistically, the [+rel] feature of IHRCs is located at the right periphery of the RC. The presence of a nominalizing determiner in sign languages can be compared to the presence of a strong determiner (2) or nominalizer (1) marking the right edge of IHRCs in spoken languages. Further evidence for the nominalizing function of these determiners in IHRCs is provided by Galloway (2012), who further claims that ASL correlatives employ a set of demonstratives manually and non-​manually different from the nominalizing determiner appearing in ASL IHRCs (see Section 15.2.4 for further discussion of correlatives).

15.2.1.2 Some diagnostic tests for IHRCs6

• Head internal to the RC
Evidence that the head is internal to the RC is provided by (i) spreading of the ‘rel’ NMMs over both the head and the RC, and (ii) scope of temporal adverbials preceding the head over the RC, but not over the main clause. This is evidence that the adverbial and, by implication, the head following it are part of the RC; for illustration, see (4a) and (6) above.

• Impossibility of head reduplication
IHRCs do not allow reduplication of the head inside the main clause.

• Embedding of the RC in the main clause
Certain characteristics of dislocated RCs indicate that the RC has been moved from a position internal to the main clause, where it can be reconstructed, thus providing evidence for its embedded status: scope of matrix negation (NEVER in (7a)) over the RC, sensitivity to islands, namely the impossibility to extract the RC when embedded within another RC (7b), and binding of a variable (IX3j) by a quantifier (NOBODY) (7c).

(7)  a.             rel
        [ONE WOMAN MAKE-UP NOT PE] IX1 MEET NEVER
        ‘I never met a woman who doesn’t wear make-up.’
     b. * [CHILDi COMPETITION WIN PEi]b [TEACHERa PRIZE [CHILD COMPETITION WIN PE]b aGIVEb PEa]np IX1 KNOW
        ‘I know the teacher who gave the prize to the child who won the competition.’
     c.              rel
        [PROFESSOR IX3j COMPUTERk BUY PEk] STUDENTj STEAL NOBODYj
        ‘No student steals the computer that his professor bought.’



• Nominal status of the RC
The possibility of modifying the RC by an ordinal (8) and sensitivity to islands (7b) are evidence for the nominal status of the RC.

(8)            rel
     [FIRST WOMAN KISS PE] NOW BANK WORK
     ‘The first woman I kissed now works in a bank.’

• Stacking
IHRCs allow stacking, as shown in (9).

(9)                       rel
     [[VASEi SEE DONE PEi] TODAY IX1 BUY PEi] EXPENSIVE
     ‘The vase that I saw that I bought today was expensive.’

A further diagnostic that has been employed tests whether the RC (together with the rel-NMMs) can be produced in isolation. This diagnostic, however, only indicates whether the potential RC is dependent on the clause next to it (a feature typical of RCs) or independent (hence coordinated to it), without providing evidence for its syntactic typology.

15.2.1.3 Theoretical accounts of IHRCs7

The first proposal to derive IHRCs in a sign language is offered by Liddell (1978), who claims that ASL IHRCs are merged inside the main clause as its argument. Galloway (2012) argues that IHRCs in sentence-initial position are topicalized. A more detailed analysis is proposed for IHRCs in LIS (Branchini & Donati 2009; Branchini 2014) and LSC (Mosella 2012), capitalizing on the movement of the determiner-like element (PE and MATEIX, respectively) from a position inside the RC (next to the head) to the C° position of the RC, thus endowing it with the necessary nominal features and turning it into a complex NP. Moreover, both analyses propose that postposed RCs target SpecCP (that is, the specifier of the projection hosting wh-phrases, which is assumed to be on the right in both languages), while fronted RCs occupy a topic position. The structure in (10b) illustrates the derivation of fronted LIS IHRCs, as in (10a), proposed in Branchini (2014: 214).

(10) a.        rel
        [CHILDi PLAY PEi] TEACHER SCOLD
        ‘The teacher scolds the child who plays.’



     b. [Tree diagram, Branchini 2014: 214. The fronted relative clause, a CP/DP of the form [IP CHILD PE PLAY] with PE in C°/D°, occupies the specifier of TopP; the main-clause IP contains the subject TEACHER and a VP in which a copy of this CP/DP sits next to the verb SCOLD, in the object position from which it has been fronted.]

Brunelli (2011), on the other hand, offers a unified analysis of LIS IHRCs and EHRCs. Strictly following Kayne’s (1994) antisymmetric framework, he assumes that the head merges both external and internal to the RC. Different raising operations then take place leading to deletion or identification of either the internal or external head, resulting in an EHRC or IHRC. He further interprets the determiner PE as an anaphoric demonstrative that either remains in situ or raises from the RC inside the relative DP. Raising of PE is followed by remnant movement of the RC. In EHRCs, the external head raises too, deleting the internal head.

15.2.2 Externally-headed relative clauses

15.2.2.1 Properties of externally-headed relative clauses

The distinctive feature of EHRCs is the realization of the head only inside the main clause, in a position determined by its syntactic function within this clause. Inside the RC, a phonological gap (‘e’) is found in the position of the missing head. In (11), the head man is the object of the main clause but the subject of the RC.

(11) Robin met the man [that e saved his wife]

Cross-linguistic variation is observed regarding the position of the RC with respect to the head: the RC can follow the external head, in which case we are dealing with a postnominal RC, as in (11); or it can precede the head, yielding a prenominal RC, as in the Japanese example in (12), where the RC precedes the head kukkii ‘cookie’. Extraposition of the RC to the right of the main clause is also attested.




(12) Taro-wa [Yoko-ga reezooko-ni irete-oita] kukkii-o hotondo paatii-ni motte itta.
     Taro-TOP Yoko-NOM refrigerator-LOC put-AUX cookie-ACC most party-to brought
     ‘Taro brought most cookies that Yoko had put in the refrigerator to the party.’
                                                (Japanese, Shimoyama 1999: 150)

In the main clause, the head can be selected by either a definite or indefinite determiner. In (12), the head kukkii is selected by a definite determiner, the quantifier hotondo ‘most’, while in (13) the head neko ‘cat’ is selected by the indefinite wh-determiner dono.

(13) Taro-wa [Yoko-ga turete kita] dono neko-ga higedasita ka sirigatte iru.
     Taro-TOP Yoko-NOM brought along which cat-NOM ran-away Q want-to-know
     ‘Taro wonders which cat that Yoko brought along ran away.’
                                                (Japanese, Shimoyama 1999)

The RC of EHRCs can be introduced by a declarative complementizer, like that in (11), by a relative pronoun, usually in the shape of a wh-element, like whom in (14), or by neither one of them, as in (15). In languages with a rich morphology, the relative pronoun may agree in gender and number with the external head and can be marked for case according to the syntactic role of the head in the RC. In the English example in (14), for instance, the wh-relative pronoun whom displays only the accusative case feature.

(14)

The actor [whom the committee granted an award] arrived late.

(15)

The woman [the children hugged] was wearing a funny hat.

As previously observed for IHRCs (Section 15.2.1), Bianchi (1999) and Alexiadou et al. (2000) show that the DP inside the relative clause in EHRCs lacks definiteness. Finally, just like IHRCs, EHRCs allow stacking (16). (16)

The book [that I bought in Paris [that my daughter loves to read]] was awarded a prize.

Similar to IHRCs, the RC of EHRCs is a complex NP. Cross-linguistically, languages may have both IHRCs and EHRCs, as shown for Japanese above (compare examples (1) and (12)) and, as will soon become clear, American Sign Language. Turning now to sign languages, EHRCs are attested in ASL (Liddell 1978, 1980; Galloway 2012, 2013), Brazilian Sign Language (Libras, Nunes & Quadros 2004), German Sign Language (DGS, Pfau & Steinbach 2005), TİD (Kubuş 2016; Kubuş & Nuhbalaoğlu 2018), LIS (Bertone 2007; Brunelli 2011; Branchini & Mantovan 2015; Branchini 2017), Sign Language of the Netherlands (NGT, Brunelli 2011), and HKSL (Li 2013) – of which the first two have SVO word order, the next four SOV, and HKSL, as mentioned before, allows for both orders. Shared features of EHRCs in these sign languages are: (i) the position of the head outside the RC; (ii) the presence of a relative pronoun co-referent with the head, as in DGS (17), or lack of any relative pronoun;8 (iii)



dedicated non-​manuals marking either the entire RC or only the relative pronoun (shared NMMs are raised eyebrows);9 (iv) in situ postnominal position of the RC, yet fronted or extraposed RCs are also attested.10          

(17)                       br
     YESTERDAY MAN IX3 [RPRO-H3 CAT STROKE] ARRIVE
     ‘The man who is stroking the cat arrived yesterday.’
     * ‘The man arrives who stroke the cat yesterday.’
                                                (DGS, adapted from Pfau & Steinbach 2005: 513)

15.2.2.2 Some diagnostic tests for EHRCs

• NMMs and temporal adverbials
In EHRCs, the ‘rel’ non-manuals marking the RC do not spread over the external head. Moreover, temporal adverbials preceding the head have scope only over the main clause, not over the RC, as is evident from the two translations provided for (17). Furthermore, tests similar to those employed for IHRCs (Section 15.2.1.2) may be applied to the analysis of EHRCs, for instance, diagnostics verifying the nominal status of the RC, the possibility of stacking, and the embedded status of the RC within the main clause.

15.2.2.3 Theoretical accounts of EHRCs11

As mentioned in Section 15.2.1.3, Liddell (1978) proposes that ASL EHRCs derive from IHRCs through promotion of the internal head to a position external to the RC. In the same section, it was pointed out that Brunelli (2011) offers a unified analysis of LIS IHRCs and EHRCs in the spirit of antisymmetry. In their analysis of DGS EHRCs, Pfau & Steinbach (2005) assume that the head is base-generated outside the RC, which is adjoined to the DP it modifies. Raising of the relative pronoun within the RC to the specifier of the highest topic phrase allows checking of the [+rel] feature, which is realized by the obligatory NMMs that extend only over the relative pronoun, as in example (17). (18) illustrates the derivation for the material inside the RC proposed by Pfau & Steinbach (2005: 513).

(18) [Tree diagram, Pfau & Steinbach 2005: 513. The relative pronoun RPROi sits in the specifier of the highest TopP, where it is marked by the ‘rel’ non-manual; it has raised there across a lower TopP from its base position ti inside the VP, which also contains the object DP CAT and the verb V° STROKE.]



Fronted EHRCs are assumed to be topicalized together with the head noun, while the RC may be extraposed by itself to the right, as shown in (19). (19)

     IX1 MAN IX3 [t]i LIKE 1PAM3 [RPRO-H3 CAT STROKE]i

‘I like the man who is stroking the cat.’               (DGS, adapted from Pfau & Steinbach 2005: 515) Li (2013) derives postnominal EHRCs in HKSL, like the one in (20), by proposing that the relative DP, which is headed by the relative pronoun and takes the head as its complement, is merged inside the RC and subsequently moves to SpecCP on the right to check its [+wh] feature. The head is then raised to an external DP position (on the left) selecting the RC, to check a strong N-​feature in the external D. In this position, the head is marked with raised eyebrows (‘br’), which, according to Li, is the non-​manual realizing the strong N-​feature, while the relative pronoun is marked by the ‘rel’ NMMs associated with the [+wh] feature.  

(20)   br                    rel
     SONi [MOUSE BITE IXi] FATHERj PRIVATE_YACHT 3jGIVE3i
     ‘The father gave a private yacht to the son whom the mouse bit.’
                                                (HKSL, adapted from Li 2013: 23)

15.2.3 Free relatives

15.2.3.1 Properties of free relatives

The defining property of free relatives (FRs) is the lack of an overt head to refer to. Instead of an overt head, the RC may be introduced by a wh-element (21) (corresponding to the wh-interrogative or relative pronoun, depending on the language) or may not display any overt material connected to the relative DP.

(21) [Who knows the answer] should help the students.

As opposed to EHRCs, FRs do not allow a complementizer to introduce the RC, and they do not allow stacking. Some languages allow upward pied-piping, namely pied-piping of the material governing the wh-pronoun, the preposition to in (22a) and the Italian preposition con in (22b). On the other hand, downward pied-piping, that is, pied-piping of the material selected by the wh-determiner, like the nominal restriction thing in (23), is cross-linguistically disallowed in FRs.

(22) a. The girls have talked [to whom I apologized].
     b. Il rettore vuole parlare [con chi ha divulgato la notizia].
        the dean want.3SG talk.INF with who has spread the news
        ‘The dean wants to talk to whom has spread the news.’
                                                (Italian)

(23) * They allowed me to bring [what thing I wanted].




A further restriction FRs display is that of being subject to the matching effect. Given the lack of an overt head and of a determiner connected to the head within the main clause, the wh-​relative determiner is the only material shared by both clauses; it thus carries a double syntactic role, that is, a different one in each clause. In many languages displaying overt case morphology on the wh-​relative determiner, the determiner must match in category and case with the syntactic role it carries in both the RC and the main clause. Violation of the matching effect leads to ungrammaticality, as shown in (24), where the accusative case marking on the wh-​relative pronoun matches with the verb like inside the FR, but not with the verb talk in the main clause, thus requiring the preposition to. (24) * I will talk [whom Anna likes]. Similar to EHRCs and IHRCs, FRs are complex nominal clauses that originate inside the main clause. As for sign languages, this syntactic type has to date only been described for LIS (Branchini 2009, 2012).12 In LIS, FRs: (i) occupy a rigid sentence-​initial position; (ii) lack an overt head, in its place, a wh-​determiner marks its right periphery (25); (iii) are marked by the same non-​manuals as LIS headed RCs; (iv) cannot feature the determiner-​ like element PE that is observed in IHRCs.      

(25)           rel
     [EXAM DONE WHO] EXIT CAN
     ‘Who has taken the exam can go out.’
                                                (LIS, Branchini 2014: 285)

As observed for spoken languages, it might be the case that not all wh-​determiners are allowed to mark FRs. According to Branchini (2009, 2012), LIS FRs disallow the presence of the wh-​determiners H OW-​M U CH /​H OW-​M ANY and WHAT . FRs may also appear in a complex construction, namely in the equivalent of pseudoclefts, as reported, for instance, for LIS (Branchini 2014) and ASL (Petronio 1991; Wilbur 1994, 1996; Wilbur & Patschke 1999; Grolla 2004; among others).13 In LIS pseudoclefts, the sentence-​initial FR is marked by the same non-​manuals that also mark LIS FRs and IHRCs, and it displays a wh-​determiner marking its right periphery, as shown in (26).14    

(26)            rel
     [ANNA IX3 LEAVE WHEN] TOMORROW
     ‘When Anna leaves is tomorrow.’
                                                (LIS, adapted from Branchini 2014: 278)

15.2.3.2 Some diagnostic tests for FRs

• Lack of an overt head
FRs do not allow the overt realization of the head together with the wh-determiner (27).

(27) * [STUDENT EXAM DONE WHO] GO-OUT CAN
                                                (LIS, Branchini 2012)



• Embedding of the RC
A way to investigate subordination in dislocated FRs is through the scope of negation and quantifier binding. If a negative element modifying the main clause predicate has scope over the sentence-initial RC (28), this is evidence that the latter was initially merged inside the main clause.

(28)       rel                         neg
     [EXAM DONE WHO] PAOLO KNOW NOT
     ‘Paolo doesn’t know who has taken the exam.’15
                                                (LIS, Branchini 2009: 108)

Evidence for the subordinated status of the RC is also provided by the possibility for a quantifier belonging to the main clause (ALL) to bind an element within the RC (WHO), as in (29).    

(29)

        rel

[ E XAM D ON E WH O] PAOLO i IX 3i K N OW ALL

‘Paolo knows all those who have taken the exam.’   

(LIS, Branchini 2009: 108)

•  Downward pied-​piping The wh-​element cannot pied-​pipe a lexical restriction (30).                       rel

(30) * [ PAOLO LIK E [H OU S E WH ICH ]] IX 1 S EE *‘I saw which house Paolo likes.’   

DONE

(LIS, Branchini 2012)

Further diagnostics for FRs involve the impossibility of stacking and evidence for the nominal status of the RC, as was shown in Section 15.2.1.2 for IHRCs.

15.2.3.3  Theoretical accounts of FRs16 In her analysis, Branchini (2012) proposes that LIS FRs originate inside the main clause and are then dislocated to its left periphery. Within the FR, the wh-​element is a determiner that originates as an argument/​adjunct of the FR predicate. By moving to the C° position of the clause, at the right periphery, it endows the FR with the necessary nominal features.

15.2.4  Correlative clauses 15.2.4.1  Properties of correlative clauses The last relativization strategy we address, correlative clauses, differs from both headed RCs and FRs in three main respects: (i) the phonological realization of the head, (ii) the structural relation between the main clause and the RC, and (iii) the syntactic category of the (cor)relative clause. As for (i), both clauses contain a co-​referent DP constituent;


337

Relative clauses

while the relative DP is interpreted as indefinite, the matrix DP has a definite interpretation. The determiner of each constituent –​usually a wh-​element in the RC, a demonstrative in the main clause (see the Hindi example in (31)) –​is obligatorily spelled out in both clauses, while the head can be produced in both of them, in either one of them (31), or it can be omitted in both. As for (ii), the two CPs are adjoined clauses; as such, the RC is never produced inside the main clause as its internal constituent.17 The order of the two clauses may vary, as shown in (31). Finally, as for (iii), the RC is a bare CP. khaRii hai] vo laRkii  standing  is D EM  girl b. vo laRkii lambii  hai [jo DEM  girl tall is REL ‘The girl who is standing is tall.’       

(31) a. [jo

RE L

lambii tall khaRii standing     

hai. is hai]. is (Hindi, Dayal 1991: 647)

Correlatives do not allow stacking operations, but they do allow the presence of multiple wh-​constituents in the correlative clause, each of which is associated with a DP in the main clause. The existence of correlatives has been proposed for LIS (Cecchetto et al. 2006)18 and ASL (Coulter 1983; Fontana 1990; Neidle 2002; Galloway 2012). Common features of correlatives in both sign languages are: (i) lack of nominal features of the (cor)relative CP; (ii) a rigid sentence-​initial position of the (cor)relative clause; (iii) obligatory (ASL) or optional (LIS) presence of a demonstrative pronoun in the main clause, co-​referent with the head (referred to as the ‘correlate’); and (iv) the NMM raised eyebrows (‘br’) spreading over the (cor)relative clause. For ASL, Galloway (2012) also reports the NMM wrinkled nose (‘wr’) spreading over the correlate in the main clause (32). While in Cecchetto et  al.’s description of the construction, the head can only be produced inside the (cor)relative clause, Galloway shows that in ASL correlatives, the head (BOOK) can either be produced in one of the two clauses (32a), or in both of them (32b) (P T stands for a deictic pointing sign).  

(32) a.

                br

[DOC TOR BORROW PT BOOK]

   

  wr

TH AT ptbook MISSING

‘The book the doctor borrowed is missing.’  

b.

              br  

      wr

[P T GIRL BORROW BOOK]  TH AT BOOK GONE

‘The book the girl borrowed is missing.’                     (ASL, adapted from Galloway 2012, example (16))

15.2.4.2  Some diagnostic tests for correlatives Useful diagnostics to test the presence of correlatives are: (i) the realization of a determiner, the correlate, inside the main clause in the position of the co-​referent head in this clause; (ii) the possibility of overtly realizing the head in both clauses; (iii) the diagnostics (illustrated in Section 15.2.1.2) for verifying the lack of nominal features of the (cor)relative clause as well as the impossibility of being embedded inside the main clause; and (iv) the impossibility of stacking.


338

Chiara Branchini

15.2.4.3  Theoretical accounts of correlatives Coulter (1983) interprets ASL RCs as conjoined clauses whose relativized argument in the second clause is deleted. Fontana (1990) follows Coulter’s conjoined analysis and further proposes that the RC is left-​adjoined to the main clause and binds a pro in argument position within the main clause. According to Fontana, ASL RCs are similar to left-​dislocated structures organized in terms of topic-​comment structures. Cecchetto et al.’s (2006) proposal for the derivation of LIS correlatives is schematized in (33). (33) [IP [CP … NPi ti … PE i] [IP … PRON OU N i …]] In their analysis, the RC and the main clause are conjoined. Within the RC, the demonstrative-​like element PE originates in a position next to the head and moves to SpecCP.19 By doing so, PE scopes over the clause, thus taking it as its argument and turning the RC into a generalized quantifier.20 As shown in (33), the main clause follows the (cor)relative clause and hosts an optional demonstrative PRONOUN that is co-​referent with the head noun produced in the preceding clause.

15.3  Semantic typologies of relativization This section presents the features that characterize restrictive and non-​restrictive RCs in spoken languages, as well as pioneering studies carried out in this domain on some sign languages. Within the semantic distinction of RCs, a third type is represented by maximalizing relative clauses; this type, however, will not be addressed in this chapter. Syntactic and semantic typologies of relativization do not univocally match. Given a syntactic type, its semantic interpretation does not follow straightforwardly. However, attempts have been made in the literature to establish a correlation between syntactic and semantic criteria by showing that the material contained inside and outside the RC plays a role in the interpretation of the head (cf. Grosu & Landman 1998; de Vries 2002).

15.3.1  Restrictive relative clauses 15.3.1.1  Properties of restrictive relative clauses Restrictive relative clauses (RRCs) restrict the class of entities denoted by the head by identifying it as the specific referent of which the RC predicates something. Semantically, RRCs are sets intersecting with the set denoted by the head, thus establishing the restriction of the main clause determiner (Partee 1976[1973]). In (34), for instance, the RC restricts the set of women to those that actively participated in the demonstration. (34) The women [who actively participated in the demonstration] were persecuted. In order to intersect with the set denoted by the RRC, the head must be non-​specific. RRCs can be introduced by either a complementizer, a wh-​relative determiner (as in (34)), or by no overt material. The head of a RRC forms a constituent with the RC and can be reconstructed in a position internal to it, suggesting it is base-​generated inside the RC.


339

Relative clauses

A restrictive interpretation of RCs has been proposed for basically all of the sign languages studied to date: ASL IHRCs and EHRCs (Liddell 1978, 1980; Galloway 2012); DGS EHRCs (Pfau & Steinbach 2005); Israeli Sign Language (ISL) RCs21 (Dachkovsky & Sandler 2009); LIS IHRCs (Branchini & Donati 2009; Branchini 2014), EHRCs (Brunelli 2011), and FRs (Branchini 2012); LSC IHRCs (Mosella 2011, 2012); and TİD IHRCs (Kubuş 2016).22 A  point worth mentioning concerns the correlation observed between restrictivity and the NMM ‘squinted eyes’ (Dachkovsky & Sandler 2009; Brunelli 2011; Kubuş & Nuhbalaoğlu 2018). Dachkovsky & Sandler (2009) claim that in ISL, squinted eyes are associated with knowledge that is retrievable and shared between signer and interlocutor, but not immediately accessible to them. As such, this NMM is also present in other syntactic environments that share a similar semantic import, such as counterfactual conditionals, less accessible topics, and temporal clauses referring to a remote past. This proposal seems to match the cross-​linguistic observation that squinted eyes, together with raised eyebrows, mark RRCs but not non-​restrictive RCs (NRRCs). A final observation, which will turn out to be interesting when comparing RRCs with NRRCs, concerns the distribution of prosodic markers in this semantic type. Previous studies on sign language prosody (e.g., Nespor & Sandler (1999) and Sandler (2006) for ISL; Baker & Padden (1978) and Wilbur (2000) for ASL) have found that phonological and intonational phrase boundaries are marked by specific manual and non-​manual signals. Both types of phrase boundaries are marked manually by a pause and a hold or reduplication of the last sign. In addition, intonational phrase boundaries are delineated by a change in NMMs, such as a change in head/​body posture and facial expression, as well as the presence of specific NMMs, e.g., by an eye blink between two intonational phrases. Returning to RCs, shared prosodic markers identified in studies on RRCs are a head nod, an eye blink, and a pause between the RC and the main clause; at least in HKSL EHRCs, an eye blink may also be produced between the external head and the RC (Li 2013).23

15.3.1.2  Some diagnostic tests for RRCs24 •  Pronominal and proper name head As opposed to NRRCs, RRCs can neither modify pronominal (35) nor proper name (36) heads.                           rel

(35) * [YE STE RDAY IX2i FALL-​O FF BIK E PE i] TODAY NEW GLASSES BUY WANT * ‘You that yesterday fell off the bike today want to buy new glasses.’              

              rel

(36) * [MARIAi CAK E COOK LIK E PE i] PREPARE DONE *‘Maria who likes to bake cakes has prepared one.’

•  Quantified head As opposed to NRRCs, RRCs can modify a quantified head (Ross 1967).

339

340

Chiara Branchini

•  Ordinal head An ordinal preceding the head of a RRC modifies the whole RC (8), while an ordinal preceding a NRRC only modifies the head.

•  Matrix negation As opposed to NRRCs, RRCs can be in the scope of matrix negation (Dermidache 1991), see (7a).

•  Intentional verbs RRCs can be in the scope of intentional verbs (Zhang 2001), as shown by (37), which involves the intentional verb think, while in NRRCs, only the head, not the RC, is in the scope of intentional verbs.                                    

(37)

GIANNI TH IN K [MENi IX 3i CAR k CL.BIG -​C AR k BUY k 

rel PE i] MARIA LIKE

‘Gianni thinks that Maria likes men who buy big cars.’

•  Ellipsis The VP targeted by VP ellipsis may include the head and the RRC (38), but not a NRRC.  

(38)

              rel

[CAKEi IX i IX 1 COOK PE i] S IS TER IX 1 LIKE BROTHER NOT

‘My sister likes the cake that I bake, my brother does not.’

•  Sentential adverbs As opposed to NRRCs, RRCs do not allow for the presence of sentential adverbs of modification, such as BY-​T H E-​WAY in (39) (Ogle 1974).  

                          rel

(39) * [WOMANi MAN BY-​T H E-​WAY K IS S PE i] PASTA MAKE * ‘The woman who by the way the man kisses makes pasta.’

•  Any category While the head of a NRRC can be of any syntactic category, as in the English example (40a), where the head is the adjective intelligent, only NP heads can be modified by RRCs, as shown by the ungrammaticality of (40b), where the head of the RRC is the adjective INTE L L IGENT (Sells 1985). (40) a.

My sister is intelligent, which my brother never is.  

                    rel

b. * [ SIS TER IX 1 INTELLIGENTi PE i] BROTHER IX 1 NEVER * ‘My sister is intelligent which my brother never is.’ 340

341

Relative clauses

15.3.1.3  Theoretical accounts of RRCs The structural derivations that have been proposed in the literature for this semantic type have already been sketched in subsections within Section 15.2, according to the syntactic type the relative construction belongs to.

15.3.2  Non-​restrictive (or appositive) relative clauses 15.3.2.1  Properties of non-​restrictive relative clauses The information provided by NRRCs is not crucial to unequivocally identifying the head; rather, they provide additional information on the referent. Owing to this property, NRRCs have often been assimilated to parentheticals and independent discourse sentences (e.g., Emonds 1979; McCawley 1982; Grosu 2002). Consequently, NRRCs do neither contribute to placing a restriction on the external determiner nor to restricting the set of entities in the world. (41) entails that all women participated in the demonstration, and all were persecuted. As opposed to the RRC in (34), there are no other women present in the discourse context that were not persecuted. (41) The women, [who actively participated in the demonstration], were persecuted. The semantic relation holding between the NRRC and its head gives rise to a number of syntactic properties and constraints. One of these concerns the head, which must be specific, i.e., presupposed. While cross-​linguistic variation is attested regarding the presence of complementizers or zero particles introducing NRRCs, wh-​relative determiners seem to be cross-​linguistically allowed. The fact that both pied-​piping of a preposition by the wh-​determiner and heavy pied-​piping are allowed suggests that the wh-​element moves towards the left edge of the RC. NRRCs do not allow stacking in the form of embedding one RC inside another. A further peculiar feature of NRRCs is the optional presence, together with an external head, of an NP internal to the RC which is co-​referent with the external head, as in (42), where city is co-​referent with the external head Paris. (42) Paris, [a city that has beautiful buildings], is loved by all tourists. Finally, some languages prosodically mark NRRCs by separating the RC by means of intonation contours and pauses. This, together with the fact that the head cannot be reconstructed in a position internal to the RC, suggests that the RC does not form a constituent with the head and that the latter is base-​generated outside the RC. NRRCs have been identified in the following sign languages:  DGS EHRCs (Happ & Vorköper 2006); LIS correlatives (Cecchetto et  al. 2006)25 and EHRCs (Branchini & Mantovan 2015; Branchini 2017); TİD EHRCs (Kubuş 2016). Except for Cecchetto et al.’s analysis of LIS correlatives, common features characterizing NRRCs across sign languages are: (i) the dependent status of the RC; (ii) lack of the determiner marking RRCs; (iii) lack of the non-​manuals marking RRCs, in place of which different NMMs may be used; (iv) prosodic cues marking the RC boundaries, corresponding to the cues that mark intonational phrases in ISL and ASL, that is, head nod, pauses, and, optionally, eye blink.


342

Chiara Branchini

Branchini & Mantovan (2015) and Branchini (2017) compare the production of three LIS constructions which superficially appear very similar:  a coordination of clauses (43a), a RRC (43b), and a NRRC (43c).26                      

(43) a.

eb/​hn

M ARIA ROME CITY K N OW NOT   ARRIVE LATE

‘Maria doesn’t know the city of Rome and arrives late.’      

b.

                  rel  eb/​hn

[ MANi CITY ROME K N OW N OT PE i]  

    ARRIVE LATE ‘The man who doesn’t know the city of Rome arrives late.’       eb/​hn                 

c.

  eb/​hn

MARIA  [CITY ROME K N OW N OT]   ARRIVE LATE

‘Maria, who doesn’t know the city of Rome, arrives late.’   (LIS, Branchini 2017) Notwithstanding the presence and distribution of almost the same manual signs, the three sentences differ in the distribution of the NMMs and pauses. More specifically, the prosodic cues marking intonational phrase boundaries, eye blink (‘eb’) and head nod (‘hn’), are only present between the two conjoined clauses and between the RC and main clause of RRCs, while they mark both RC boundaries in NRRCs. Furthermore, the analysis of the signing pauses shows that NRRCs pattern with RRCs in the length of the signing pause between the RC and the main clause, as opposed to a longer signing pause between the two independent conjoined clauses. According to the authors, together with the impossibility of producing the NRRC in isolation (as is the case for the RRC in (43b)), this evidence points towards the dependent status of LIS NRRCs. The authors also discuss evidence pointing toward the external position of the head in NRRCs: (i) prosodic cues separate the head from the RC; (ii) the signing pause between the head and the RC in NRRCs is much longer than the signing pause between the head and the following sign in restrictive IHRCs; (iii) a time adverbial referring to the predicate of the RC follows the head. Time adverbials always mark the sentence-​initial boundary in LIS. Finally, the authors report no evidence for the nominal status of the NRRC, as it lacks the relative determiner sign PE supposed to endow the RC with nominal features, and it cannot be modified by ordinals together with the NP head.

15.3.2.2  Some diagnostic tests for NRRCs In the literature on spoken languages, the non-​restrictive interpretation of RCs has been tested against a battery of diagnostic tests illustrated in Section 15.3.1.2 for RRCs. Given that RRCs and NRRCs are opposites, a non-​restrictive interpretation is obtained if the tests proposed for RRCs yield opposite results. As for the diagnostics that allow for a distinction between NRRCs and independent discourse sentences, Grosu (2002) observes that (i)  NRRCs are dependent clauses while parentheticals are independent clauses; (ii) while NRRCs may contain a variable (the relative pronoun who in (44c)) that must be bound by a linguistic antecedent within the sentence (singer in (44c)), independent sentences may contain a variable (the pronoun she in (44a)) that can also refer to a referent not present in the discourse context. The ungrammaticality of the NRRC in (44b) is due to the lack of a linguistic antecedent within the sentence able to bind the variable who.

342

343

Relative clauses

(44) a. The house collapsed; she ran away terrified. b. * The house collapsed, [who ran away terrified]. c. The singeri, whoi we admired, ran away terrified.        (Grosu 2002: 146) Observation (i)  can be verified by testing the possibility of producing the RC in isolation:  if it cannot be produced on its own with the relevant non-​manuals spreading over it and marking its boundaries, then it is likely to be a RC.27 Observation (ii) can be tested through grammatical judgments verifying the possibility of variable binding with linguistic antecedents inside and outside the complex sentence. While the variable (subsuming null pronominals) inside the NRRC requires a linguistic antecedent within the complex sentence, the variable contained in independent discourse sentences can be bound by a sentence-​external referent. Branchini (2017) shows that, as opposed to LIS parentheticals (45b), LIS NRRCs cannot contain a variable, the null pronoun pro in (45a), referring to a referent external to the discourse context. eb/​hn                          

(45) a.

eb/​hn

LUCAi,    [ proi/​*k S ITUATION BE-​U NAWARE] ,

PARTY ORGANIZE

‘Lucai, whoi/​*k is unaware of the situation, organized a party.’ eb/​hn                        

b.

LUC A i,

eb/​hn

IX i/​k S ITUATION BE-​U NAWARE,

PARTY ORGANIZE

‘Lucai, hei/​k is unaware of the situation, organized a party.’  (LIS, Branchini 2017) Furthermore, Branchini (2017) signals different ordering restrictions for NRRCs as compared to parentheticals in LIS. Following Ross (1984), there are adjacency restrictions between the head and NRRCs (46a, within brackets); this contrasts with a flexible order for parentheticals (46b, within parentheses). Furthermore, while LIS NRRCs must follow LIS RRCs, parentheticals may either precede or follow them (47). eb/​hn   

(46) a.

        eb/​hn

[*I KNOW WELL], MARIO,  [I K N OW WELL] ,  



COMPETITION RUN

W IN C ERTAIN LY [*I K N OW WELL]

‘Mario, whom I know well, will certainly win the competition.’ b.

(EV ERYON E K N OW) CAR WH ITE (EVERYONE KNOW) GET-​D IRTY EASILY ( EV ERYON E K N OW)

‘(Everyone knows), white cars, (everyone knows), get dirty easily, (everyone knows).’        (LIS, Branchini 2017)                      

(47) a.

[ *LUC A K N OW N OT]  

            rel eb/​hn          [MARIA MUSEUMi VISIT PE i],  



eb/​hn

[ LUCA KNOW NOT] ,

YE STE RDAY COLLAPS ED

‘The museum that Maria visited, that Luca doesn’t know, collapsed yesterday.’                      

b.

            rel

(EVERYONE KNOW)   [MARIA HOUSEi BUY PE i] (EVERYONE KNOW) EXPENSIVE ‘ (Everyone

knows), the house that Maria bought, (everyone knows), is

expensive.’                                 (LIS, Branchini 2017)

343

344

Chiara Branchini

15.3.2.3  Theoretical accounts of NRRCs Different proposals have been advanced in the literature of spoken languages for the structural derivation of NRRCs.28 None of them seems, however, to have gathered a large consensus and to be based on unequivocally strong linguistic data. Theoretical accounts trying to represent NRRCs in sign languages have only been proposed for LIS correlatives (Cecchetto et  al. 2006) and EHRCs (Branchini 2017). Cecchetto et al. (2006), in their analysis of what they take to be LIS correlatives (Section 15.2.4.3), claim that the movement of the demonstrative PE from an adnominal position to SpecCP is semantically motivated by the need to connect the relative and the matrix CP. From its derived position, PE scopes over the RC and takes it as its argument. Moreover, the pronoun in the main clause is analyzed as an e-​type anaphora yielding a non-​restrictive interpretation of the RC, which is interpreted as a subject-​predicate structure. As for LIS EHRCs, Branchini (2017) proposes that the NRRC is adjoined to the external NP head.

15.4  Topics and relative clauses
When looking at the examples of relative constructions provided in the studies on sign languages, it becomes evident that RCs are commonly displaced from their base position, a phenomenon that mainly concerns IHRCs (48a) and FRs (48b) and, to a lesser degree, EHRCs (48c). Cross-linguistically, such displacement preferably targets the left periphery of the sentence.29

(48) a.       rel/top
        [IXa BOY RUN]i IX1 ti KNOW
        ‘The boy that is running, I know (him).’   (HKSL, adapted from Tang & Lau 2012: 359)
     b.       rel
        [PAOLO LIKE WHICH]i IX1 ti SEE DONE
        ‘I saw which Paolo likes.’   (LIS, adapted from Branchini 2012)
     c.       br
        BOOK [RPRO-NH3 POSS1 FATHER READ] IX1 ti KNOW
        ‘I know the book which my father is reading.’   (DGS, adapted from Pfau & Steinbach 2005: 515)

For many sign languages, a direct relation between sentence-initial RCs and topics has been drawn, based on the fact that they are accompanied by similar NMMs, the main component being raised eyebrows. For instance, Tang & Lau (2012) for HKSL and Liddell (1978) for ASL claim that, when occurring sentence-initially, IHRCs are marked by raised eyebrows, the same NMM that also accompanies topics, to the extent that it is difficult to assert the presence of an independent topic NMM (48a). In her PhD thesis, Dachkovsky (2018) focuses on the linguistic production of RCs by three groups of ISL signers (old, middle-aged, and young). Interestingly, she describes the grammaticalization process that two intonational components (forward head movement and squinted eyes) undergo from being markers of information structure to being reanalyzed and employed as markers of subordination in relative clause constructions, thus further tightening the syntactic bond between relativization and information structure.


Indeed, displacement of subordinate clauses is frequently observed in sign languages. In previous studies on subordination (cf. Cecchetto et al. (2006), Geraci, Cecchetto et al. (2008), and Geraci & Aristodemo (2016) for LIS; Quer (2012) for LSC, among others), it has been pointed out that sign languages tend to avoid center embedding, a constraint that has been tentatively linked to working memory limitations (cf. Geraci, Gozzi, et al. 2008; Geraci et al. 2010). Different proposals suggest that the landing site of the moved RC is the specifier of a topic phrase. If we follow Dachkovsky & Sandler (2009), who propose that RRCs share with some topics the semantic import of retrieving shared information, then it is no surprise that, among the positions available within the left periphery, the topic projection is selected. This line of reasoning directly accounts for the fact that non-​restrictive EHRCs are, to date, reported to occur in situ. It should also be noted that in the literature on sign languages, raised eyebrows have been associated with A’ dependencies, namely with movement to SpecCP (cf. Wilbur & Patschke 1999; Neidle et al. 2000). Following Aaron’s (1994) and Sze’s (2013) proposal that different topic positions are available in ASL and HKSL, respectively, a challenge for future research will be to verify the position targeted by RRCs when dislocated to sentence-initial position. The assumption is that the choice of position is not random, but rather strictly determined by the syntactic and semantic function associated with the targeted topic position and, crucially, shared by the dislocated RC.

15.5  Conclusions
Investigations on relative constructions in sign languages reveal that sign languages display the same parametric variation in this domain as observed for spoken languages. However, to date, prenominal EHRCs are not attested.30 Common features of relativization in sign languages are: (i) non-manual and prosodic markers signaling the RC; (ii) (optional) presence of a determiner within IHRCs; (iii) presence of a relative pronoun or of a zero marking strategy in EHRCs; (iv) presence of a demonstrative in the main clause of correlatives. Interestingly, sign languages might provide further evidence for the claim that the nominal status of headed RCs is derived differently in IHRCs and EHRCs: from selection of an external determiner forcing the head to raise to an external position in EHRCs, and from the presence of a clause-internal strong determiner in IHRCs. The syntactic types also differ in the position of the RC: EHRCs preferably occupy an in situ position, while IHRCs, FRs, and correlatives appear to the left of the main clause. As for the semantic types, the nominalizing determiner and squinted eyes are only found in RRCs. This is in line with the semantics of NRRCs, in particular the fact that NRRCs do not unequivocally identify the head and, most probably, lack nominal features. While squinted eyes are observed in all restrictive IHRCs and in FRs, the analysis of the NMMs accompanying restrictive EHRCs is rather sketchy, the only observation being that the RC (but not the external head) of LIS and TİD restrictive EHRCs is also marked by squinted eyes. More cross-linguistic research in this domain is needed in order to be able to generalize the proposal that squinted eyes is a marker of restrictivity in sign languages. Moreover, the behavior of prosodic markers in NRRCs clearly sets them apart from RRCs. Their dependent status provides further evidence against an analysis as parenthetical clauses (Grosu 2002). This area, too, requires more cross-linguistic research.


Finally, as observed for spoken languages (de Vries 2002), IHRCs cannot be interpreted as non-​restrictive. The analysis of relativization needs to be extended to other sign languages. Some topics that await further investigation are: (i) FRs in other sign languages, in order to verify cross-​linguistic variation within this syntactic type; (ii) NRRCs, in order to gain a better understanding of this semantic type; (iii) the role the NMMs squinted eyes and brow raise play in marking restrictivity and movement to the left periphery; (iv) the correlation between topics and RCs and the specific position the latter target when fronted.

Notes
1 In the literature, this NMM is also referred to as ‘tensed eyes’ or ‘tensed cheeks’.
2 See, for instance, Cole (1987), Williamson (1987), and Culy (1990).
3 See Wilbur (2017) for a review on IHRCs in sign languages.
4 See also Galloway (2012, 2013) for an analysis of ASL RCs as IHRCs.
5 For reasons of space, we cannot provide the complete paradigms of nominalizing elements for each sign language. The reader is referred to Fischer & Johnson (2012) and Koulidobrova (2011) for discussion on ASL SELF.
6 All examples in this section are taken from LIS (Branchini 2014).
7 The reader is referred to Kayne (1994) and Bianchi (1999) for a derivation of IHRCs in spoken languages within the raising analysis.
8 According to Pfau & Steinbach (2005), DGS features two relative pronouns that agree with the head noun in the feature [± human]; the pronoun RPRO-H in (17) is the one used with human referents.
9 Bertone (2007) and Brunelli (2011) for LIS and Kubuş (2016) for TİD report that restrictive RCs are also marked by squinted eyes.
10 Dislocation of the RC sometimes involves the presence of different NMMs and/or indexical signs (cf. Liddell 1978, 1980; Pfau & Steinbach 2005).
11 See Kayne (1994) and Bianchi (1999) for proposals advanced in the literature of spoken languages to derive EHRCs within the raising analysis.
12 Kubuş (2016) reports that a few of the TİD RCs he analyzes lack an overt head. However, he casts some doubts on their genuine syntactic nature as FRs.
13 See Caponigro & Davidson (2011) for a different interpretation of ASL pseudoclefts.
14 Studies on ASL pseudoclefts report the presence of the NMM brow raise over the sentence-initial wh-constituent and claim the absence of free relative clauses in ASL.
15 The English verb ‘to know’ has two different translations in Italian and LIS: one is the verb sapere ‘to know something’ and the other one is conoscere ‘to be acquainted with somebody’. In the LIS sentence in (28), the second verb, only selecting a DP complement, is intended.
16 The reader is referred to Donati (2000, 2006) for a proposal to derive FRs in spoken languages.
17 See Bhatt (2003) for a proposal that the left-adjoined position of (cor)relative clauses is derived through movement.
18 See Branchini & Donati (2009) and Branchini (2014) for arguments in favor of analyzing the same constructions as IHRCs.
19 See Cecchetto et al. (2009) for the proposal of SpecCP being on the right in LIS.
20 The reader is referred to Dayal (1991) and Bhatt (2003), among others, for proposals concerning the derivation of correlatives advanced in the literature on spoken languages.
21 The authors do not discuss the syntactic type(s) of ISL RCs.
22 Although not discussed explicitly, the translations provided for HKSL and Libras RCs also seem to suggest a restrictive interpretation.
23 Presence vs. lack of an eye blink between the head and the RC might turn out to be a further diagnostic for distinguishing EHRCs from IHRCs.


24 All examples reported in this section are from LIS (Branchini 2014).
25 See Branchini & Donati (2009) and Branchini (2014) for an analysis of the same construction as RRCs.
26 For the sake of simplicity, the NMM for negation is neglected in (43).
27 Given the prodrop nature of sign languages, sentences can display a null subject. However, in order to be judged acceptable, even parentheticals need to refer to a referent already introduced in the discourse context.
28 Among the proposals advanced to syntactically represent NRRCs are the coordinate analysis (Emonds 1979), the discontinuous constituent structure analysis (McCawley 1982; a.o.), the radical orphanage analysis (Safir 1986, a.o.), and the raising analysis (Kayne 1994; Bianchi 1999; a.o.).
29 The literature on relative clauses reports that dislocation of the RC is also attested cross-linguistically in spoken languages, with a preference for sentence-initial movement for IHRCs and extraposition for EHRCs. See Cole (1987), Williamson (1987), Culy (1990), Bonneau (1992), Cole & Hermon (1994), and Basilico (1996) for some discussion on the displacement of IHRCs in spoken languages.
30 However, Wilbur (2017) notes that Ichida (2010) claims that Japanese Sign Language displays both prenominal and postnominal EHRCs. Since, to date, RCs have only been investigated for a few sign languages, further studies are needed before being able to express a generalization in this direction.

References Aaron, Debra. 1994. Aspects of the syntax of American Sign Language. Boston: Boston University PhD dissertation. Alexiadou, Artemis, Paul Law, André Meinunger, & Chris Wilder (eds.). 2000. The syntax of relative clauses. Amsterdam: John Benjamins. Baker, Charlotte & Carol Padden. 1978. Focusing on the non-​manual components of ASL. In Patricia Siple (ed.), Understanding language through sign language research, 27–​57. New York: Academic Press. Basilico, David. 1996. Head position and internally headed relative clauses. Language 72. 498–​532. Bertone, Carmela. 2007. La struttura del sintagma determinante nella Lingua dei Segni Italiana (LIS). Venezia: Università Ca’ Foscari Venezia PhD dissertation. Bhatt, Rajesh. 2003. Locality in correlatives. Natural Languages and Linguistic Theory 21(3). 485–​541. Bianchi, Valentina. 1999. Consequences of antisymmetry: Headed relative clauses. Berlin: Mouton de Gruyter. Bonneau, José. 1992. The structure of internally headed relative clauses. Montreal:  McGill University PhD dissertation. Branchini, Chiara. 2009. Relative libere e interrogative Wh-​in LIS. In Carmela Bertone & Anna Cardinaletti (eds.) Atti delle giornate di studi sulla grammatica della lingua dei segni italiana, 101–​115. Venezia: Edizioni Cafoscarina, Atti 9. Branchini, Chiara. 2012. Disentangling free relative clauses in Italian Sign Language. Poster presented at Formal and Experimental Advances in Sign Language Theory (FEAST 2), Warsaw University. Branchini, Chiara. 2014. On relativization and clefting. An analysis of Italian Sign Language (LIS). Berlin: De Gruyter Mouton. Branchini, Chiara. 2017. Digging up the core features of (non)restrictiveness in sign languages’ relative constructions. Talk presented at Formal and Experimental Advances in Sign Language Theory (FEAST 6), University of Iceland, Reykjavik. Branchini Chiara & Caterina Donati. 2009. Relatively different:  Italian Sign Language relative clauses in a typological perspective. In Anikó Lipták (ed.) Correlatives cross-​linguistically, 157–​191. Amsterdam: John Benjamins. Branchini, Chiara & Lara Mantovan. 2015. In search for non-​restrictive relative clauses in Italian Sign Language (LIS). Talk presented at the 1st Meeting on the Morphosyntax of Portuguese Sign Language and Other Sign Languages, Porto. Brunelli, Michele. 2011. Antisymmetry and sign languages: A comparison between NGT and LIS. Amsterdam: University of Amsterdam PhD dissertation. Utrecht: LOT.


Chiara Branchini Caponigro, Ivano & Kathryn Davidson. 2011. Ask, and tell as well: Questions-​answer clauses in American Sign Language. Natural Language Semantics 19. 323–​371. Cecchetto, Carlo, Carlo Geraci, & Sandro Zucchi. 2006. Strategies of relativization in Italian Sign Language. Natural Language and Linguistic Theory 24. 945–​975. Cecchetto, Carlo, Carlo Geraci, & Sandro Zucchi. 2009. Another way to mark syntactic dependencies. The case for right peripheral specifiers in sign languages. Language 85. 278–​320. Cole, Peter. 1987. The structure of internally headed relative clauses. Natural Language and Linguistic Theory 5. 277–​302. Cole, Peter & Gabriella Hermon. 1994. Is there LF wh-​movement? Linguistics Inquiry 25. 239–​262. Coulter, Geoffrey R. 1983. A conjoined analysis of American Sign Language relative clauses. Discourse Processes 6. 305–​318. Culy, Christopher. 1990. The syntax and semantics of internally headed relative clauses. Stanford: Stanford University PhD dissertation. Dachkovsky, Svetlana. 2018. Grammaticalization of intonation in Israeli Sign Language:  From information structure to relative clause relations. Haifa: University of Haifa PhD dissertation. Dachkovsky, Svetlana & Wendy Sandler. 2009. Visual intonation in the prosody of a sign language. Language and Speech 52 (2/​3). 287–​314. Dayal, Veneeta Srivastav. 1991. The syntax and semantics of correlatives. Natural Languages and Linguistic Theory 9. 637–​686. Dermidache, Hamida. 1991. Resumptive chains in restrictive relatives, appositives and dislocation structures. Cambridge, MA: Massachusetts Institute of Technology PhD dissertation. Donati, Caterina. 2000. La sintassi della comparazione. Padova: Unipress. Donati, Caterina. 2006. On wh-​ head-​ movement. In Lisa Cheng & Norbert Corver (eds.), Wh-​movement moving, 21–​46. Cambridge, MA: MIT Press. Emonds, Joseph. 1976. A transformational approach to English syntax. New York: Academic Press. Emonds, Joseph. 1979. Appositive relative clauses have no properties. Linguistic Inquiry 11. 337–​362. Fischer, Susan & Robert Johnson. 2012. Nominal markers in ASL. Sign Language & Linguistics 15(2). 243–​250. Fontana, Josep M. 1990. Is ASL like Diegueño or Diegueño like ASL? A study of internally headed relative clauses in ASL. In Ceil Lucas (ed.), Sign language research. Theoretical issues, 238–​255. Washington, DC: Gallaudet University Press. Galloway, Teresa. 2012. Distinguishing correlatives from internally headed relative clauses in ASL. Paper presented at Semantics of Under-​represented Languages (SULA) 7, Cornell University. Galloway, Teresa. 2013. Internally headed relative clauses in American Sign Language. Poster presented at Annual Meeting of the Linguistic Society of America (LSA 87), Boston University. Geraci, Carlo & Valentina Aristodemo. 2016. An in-​depth tour into sentential complementation in Italian Sign Language. In Roland Pfau, Markus Steinbach, & Annika Hermann (eds.), A matter of complexity. Subordination in sign languages, 95–​150. Berlin: De Gruyter Mouton. Geraci, Carlo, Carlo Cecchetto, & Costanza Papagno. 2010. Remembering phonologically in a language without sounds. In Robert V. Nata (ed.), Progress in education 20, 141–​151. Hauppauge: Nova Science Publishers. Geraci, Carlo, Carlo Cecchetto, & Sandro Zucchi. 2008. Sentential complementation in Italian Sign Language. In Michael Grosvald & Dionne Soares (eds.), Proceedings of the 38th Western Conference on Linguistics, 46–​58. Davis: Department of Linguistics University of California. 
Geraci, Carlo, Marta Gozzi, Costanza Papagno, & Carlo Cecchetto. 2008. How grammar can cope with limited short-​term memory: Simultaneity and seriality in sign languages. Cognition 106. 780–​804. Grolla, Elaine. 2004. Clausal equations in American Sign Language. Poster presented at the 8th International Conference on Theoretical Issues in Sign Language Research (TISLR 8), Universitat de Barcelona, Barcelona. Grosu, Alexander. 2002. Strange relatives at the interface of two millennia. Glot International 6. 145–​167. Grosu, Alexander & Fred Landman. 1998. Strange relatives of the third kind. Natural Language Semantics 6. 125–​170. Happ, Daniela & Marc O. Vorköper. 2006. Deutsche Gebärdensprache. Ein Lehr-​und Arbeitsbuch. Frankfurt: Fachochschulverlag.


Relative clauses Herrmann, Annika. 2010. The interaction of eye blinks and other prosodic cues in German Sign Language. Sign Language & Linguistics 13(1). 3–​39. Ichida, Yasuhiro. 2010. Introduction to Japanese Sign Language: Iconicity in language. Studies in Language Sciences 9. 3–​32. Kayne, Richard. 1994. The antisymmetry of syntax. Cambridge, MA: MIT Press. Keenan, Edward L. 1985. Relative clauses. In Timothy Shopen (ed.), Language typology and syntactic description 2, 141–​170. Cambridge: Cambridge University Press. Koulidobrova, Helen. 2011. self:  Intensifier and ‘long distance’ effects in ASL. Manuscript, University of Connecticut. Available at: http://​webcapp.ccsu.edu/​u/​faculty/​elenakoulidobrova/​ SELF%20and%20Long%20Distance%20Anaphor%20in%20ASL.%20Ms.%20UConn..pdf (accessed 15 December, 2017). Kubuş, Okan. 2016. Relative clause constructions in Turkish Sign Language. Hamburg: Universität Hamburg PhD dissertation. Kubuş, Okan & Derya Nuhbalaoğlu. 2018. The challenge of marking relative clauses in Turkish Sign Language, Dilbilim Araştirmalari Dergisi 2018/​1, 139–​160. Boğaziçi Üniversitesi Yayınevi, İstanbul. Li, Jia. 2013. Relative constructions in Hong Kong Sign Language. Hong Kong:  The Chinese University of Hong Kong PhD dissertation. Liddell, Scott. 1978. Non-​manual signals and relative clauses in ASL. In Patricia Siple (ed.), Understanding language through sign language research, 59–​90. New York: Academic Press. Liddell, Scott. 1980. American Sign Language syntax. The Hague: Mouton. McCawley, James D. 1982. Parentheticals and discontinuous constituent structure. Linguistic Inquiry 13. 91–​106. Mosella, Marta. 2011. The position of fronted and postposed relative clauses in Catalan Sign Language (LSC). Paper presented at Formal and Experimental Advances in Sign Language Theory (FEAST 1). Università Ca’ Foscari, Venice. Mosella, Marta. 2012. Les construccions relatives en Llengua de Signes Catalana (LSC). Barcelona: Universitat de Barcelona PhD dissertation. Munro, Pamela. 1976. Mojave syntax. New York: Garland. Neidle, Carol. 2002. Language across modalities: ASL focus and question constructions. Linguistic Variation Yearbook 2, 71–​98. Amsterdam: John Benjamins. Neidle, Carol, Judy Kegl, Dawn MacLaughlin, Ben Bahan, & Robert Lee. 2000. The syntax of American Sign Language:  Functional categories and hierarchical structure. Cambridge, MA: MIT Press. Nespor, Marina & Wendy Sandler. 1999. Prosody in Israeli Sign Language. Language & Speech 42(2&3). 143–​176. Nespor, Marina & Irene Vogel. 1986. Prosodic phonology. Berlin: Mouton de Gruyter. Nunes, Jairo & Ronice Mueller de Quadros. 2004. Phonetic realization of multiple copies in Brazilian Sign Language. Paper presented at the 8th Conference on Theoretical Issues in Sign Language Research (TISLR 8), Barcelona. Ogle, Richard. 1974. Natural order and dislocated syntax. Los Angeles, CA:  UCLA PhD dissertation. Partee, Barbara. 1976[1973]. Some transformational extensions of Montague grammar. In Barbara Partee (ed.), Montague Grammar, 51–​76. New York: Academic Press. Petronio, Karen. 1991. A focus position in ASL. In Jonathan Bobaljik & Toni Bures (eds.), Papers from the third student conference in linguistics (MIT Working Papers in Linguistics 14), 211–​225. Cambridge, MA: MITWPL. Pfau, Roland & Markus Steinbach. 2005. Relative clauses in German Sign Language: Extraposition and reconstruction. In Leah Bateman & Cherlon Ussery (eds.), Proceedings of the North East Linguistic Society (NELS 35) vol. 2, 507–​521. Amherst: GLSA. 
Pfau, Roland & Markus Steinbach. 2016. Complex sentences in sign languages: Modality –​typology –​discourse. In Roland Pfau, Markus Steinbach, & Annika Herrmann (eds.), A matter of complexity: Subordination in sign languages, 1–​35. Berlin: De Gruyter Mouton. Quer, Josep. 2012. A modality-​free account of the position of clausal arguments. Paper presented at Formal and Experimental Advances in Sign Language Theory (FEAST 2), University of Warsaw. Ross, John R. 1967. Constraint on variables in syntax. Cambridge:  Massachusetts Institute of Technology PhD dissertation.


Chiara Branchini Ross, John R. 1984. Inner islands. In Claudia Brugman, Monica Maccaulay, Amy Dahlstrom, Michele Emanatian, Birch Moonwoman, & Catherine O’Connor (eds.), Control and command in Tzotzil purpose clauses, 258–​265. Berkeley: Berkeley Ling. Soc. of California, xvii, 698. Safir, Ken. 1986. Relative clause in a theory of binding and levels. Linguistic Inquiry 17. 663–​689. Sandler, Wendy. 1999. The medium and the message: Prosodic interpretation of linguistic content in Israeli Sign Language. Sign Language & Linguistics 2. 187–​216. Sandler, Wendy. 2006. Phonology, phonetics, and the non-​dominant hand. In Louis Goldstein, Douglas H. Whalen, & Catherine Best (eds.), Papers in laboratory phonology: Varieties of phonological competence, 185–​212. Berlin: Mouton de Gruyter. Selkirk, Elizabeth. 2005. Comments on intonational phrasing in English. In Sonja Frota, Marina Vigario, & Maria João Freitas (eds.), Prosodies:  With special reference to Iberian languages, 11–​58. Berlin: Mouton de Gruyter. Sells, Peter. 1985. Restrictive and non-​restrictive modification. CSLI Report No. CSLI-​85-​28. Stanford, California. Shimoyama, Junko. 1999. Internally headed relative clauses in Japanese and E-​type anaphora. Journal of East Asian Linguistics 8. 147–​182. Sze, Felix. 2003. Word order of Hong Kong Sign Language. In Anne Baker, Beppie van den Bogaerde, & Onno Crasborn (eds.), Cross-​linguistic perspectives in sign language research: Selected papers from TISLR 2000, 163–​192. Hamburg: Signum. Sze, Felix. 2008a. Blinks and intonational phrasing in Hong Kong Sign Language. In Josep Quer (ed.), Signs of the time: Selected papers from TISLR 2004, 83–​107. Hamburg: Signum. Sze, Felix. 2008b. Topic constructions in Hong Kong Sign Language. Bristol: University of Bristol PhD dissertation. Sze, Felix. 2013. Non-​manual markings for topic constructions in Hong Kong Sign Language. In Annika Herrmann & Markus Steinbach (eds.), Nonmanuals in sign language, 111–​142. Amsterdam: John Benjamins. Tang, Gladys & Prudence Lau. 2012. Coordination and subordination. In Roland Pfau, Markus Steinbach, & Bencie Woll (eds.), Sign Language: An international handbook (HSK –​Handbooks of linguistics and communication science), 340–​365. Berlin: De Gruyter Mouton. Tang, Gladys, Prudence Lau, & Jafi Lee. 2010. Strategies of relativization in Hong Kong Sign Language. Paper presented at Theoretical Issues in Sign Language Research (TISLR 10), Purdue University, West Lafayette, IN. Thompson, Henry. 1977. The lack of subordination in American Sign Language. In Lynn Friedman (ed.), On the other hand:  New perspectives on American Sign Language, 78–​94. New  York: Academic Press. Vries, Mark de. 2002. The syntax of relativization. Amsterdam:  University of Amsterdam PhD dissertation. Utrecht: LOT. Watanabe, Akira. 1992. WH in situ, subjacency and chain formation. Cambridge, MA: MIT Working Papers in Linguistics. Wilbur, Ronnie. 1994. Foregrounding structures in American Sign Language. Journal of Pragmatics 22. 647–​672. Wilbur, Ronnie. 1996. Evidence for the function and structure of wh-​clefts in American Sign Language. International Review of Sign Linguistics 22. 209–​256. Wilbur, Ronnie. 2000. Phonological and prosodic layering of non-​manuals in American Sign Language. In Harlan Lane & Karen Emmorey (eds), The signs of language revisited: Festschrift for Ursula Bellugi and Edward Klima, 213–​241. Hillsdale, NJ: Erlbaum. Wilbur, Ronnie. 2017. Internally-​headed relative clauses in sign languages. 
Glossa:  A Journal of General Linguistics 2(1): 25. 1–​34. Wilbur, Ronnie & Cynthia Patschke. 1999. Syntactic correlates of brow raise in ASL. Sign Language & Linguistics 2(1). 3–​41. Williamson, Janis S. 1987. An indefiniteness restriction for relative clauses in Lakhota. In Eric Reuland & Alice ter Meulen (eds.), The representation of (in)definiteness, 168–​190. Cambridge, MA: MIT Press. Zhang, Niina. 2001. Sell non-​ restrictive relative clauses. Manuscript, National Chung Cheng University.


16 Role shift: Theoretical perspectives
Markus Steinbach

16.1  Introduction
Sign languages have a powerful device to report utterances, thoughts, and actions of other individuals by shifting into the perspective of the reported individual and depicting the linguistic and non-linguistic actions of this individual. This device has many names; two prominent ones are role shift and constructed action. Role shift has long been a focus of research on sign languages and is as well investigated as, for instance, agreement and classifier constructions. While in many fields sign language linguistics is influenced and guided by research on spoken languages, research on role shift in sign languages is an example of influence in the opposite direction. As we will see below, recent research on quotational constructions in spoken languages is influenced by research on role shift in sign languages. One prominent example is the be-like construction, which can also be used to report utterances, thoughts, and actions and which is subject to restrictions similar to those on role shift. Role shift, like agreement and classifier constructions, is both a well-described and a hotly debated phenomenon. While formal linguists have been more interested in the quotational function of role shift and the question of whether role shift is an instance of direct or indirect speech, cognitive linguists have highlighted the demonstrative function of role shift and the gestural components used for constructed actions. An interesting recent development is that new theories try to combine both aspects of the analysis of role shift, integrating aspects of previous formal and cognitive approaches into more general theories. Additionally, the issue of modality and the limits of a modality-independent theory of reported speech and reported action has become a central research question. Broad, representative empirical investigations of role shift in many different spoken and sign languages, including experimental and corpus studies as well as studies of the acquisition of role shift (for the latter, see Emmorey & Reilly 1998; Lillo-Martin & Quadros 2011; Köder 2016, 2018), are still in their early stages. Not long ago, a handbook chapter on role shift in sign languages appeared: the chapter by Lillo-Martin (2012) gives a comprehensive overview of different kinds of role shift, similar quotative constructions in spoken languages, and earlier cognitive and formal analyses of role shift. A descriptive overview of role shift from the perspective of reference grammar writing is given in Quer et al. (2017). Problems of annotation of role shift are discussed in Cormier et al. (2015). In this chapter, overlap with these works is avoided to the extent possible. Therefore, the chapter focuses on new empirical data and the latest analyses of role shift in the framework of theoretical linguistics, especially formal semantics and pragmatics. In addition, recent developments at the interface between gesture and (sign) language are integrated wherever possible.
This chapter is organized as follows: in Section 16.2, I briefly discuss role shift in the context of indirect speech since role shift is one prominent modality-specific construction of sentential complementation. Section 16.3 introduces a descriptive terminological distinction between two different kinds of role shift relevant for the following discussion, namely attitude role shift and action role shift. Section 16.4 turns to morphosyntactic and prosodic aspects of role shift, non-manual markers, and point-of-view operators and sketches a syntactic analysis of role shift in terms of sentential embedding. The following two sections are devoted to different semantic and pragmatic aspects of role shift. In Section 16.5, we first turn to the interpretation of indexicals in role shift before we discuss the influence of demonstrational theories on the formal analysis of role shift in Section 16.6. The expressive power of role shift in narration and the possibility of mixing perspectives in role shift are the topic of Section 16.7. Finally, Section 16.8 concludes this chapter and discusses some modality-specific and modality-independent aspects of role shift. For the illustration of various properties of role shift in sign languages, I use selected examples from the Göttingen DGS fable corpus (Herrmann & Pendzich 2018) in addition to well-known examples discussed in the literature.

16.2  Role shift and sentential complementation At first sight, role shift in sign languages has two different faces: on the one hand, it looks like a modality-​specific device of reported speech. On the other hand, it is a much more powerful means to reproduce the linguistic as well as the non-​linguistic actions of another person. The latter function is often referred to as constructed action (I come back to terminological issues in the next section). In this section, I start with a brief discussion of the quotational function of role shift before I broaden the perspective and give a more general sketch of different functions of role shift in sign language in the subsequent sections. In formal linguistics, role shift is typically discussed in the context of sentential complementation. The Catalan Sign Language (LSC) examples in (1) discussed in Quer (2016) illustrate that the role shift in (1b) is functionally equivalent to the indirect speech counterpart in (1a) with standard sentential complementation.1 Unlike indirect speech, which involves sentential embedding, role shift is marked with specific non-​manual markers (glossed as ‘rs’ in (1b), cf. Section 16.4) and typically triggers a shifted interpretation of indexical expressions such as first and second person pronouns. As a consequence, the first person indexical in (1b) does not refer to the actual signer (i.e., the reporter of Anna’s utterance) but to Anna, the signer of the reported speech. By contrast, in the indirect speech report in (1a), the first person indexical would refer to the actual signer. Therefore, a third person pronoun must be used in the embedded clause to refer back to Anna anaphorically.2




(1) a.              t
       ANNAi 3SAY1 IX3i FED-UP LOSE+++
       ‘Annai told me that shei was fed up with losing so often.’
    b.              rs-i
       ANNAi 3SAY2 IX1i FED-UP LOSE+++
       ‘Annai told you that shei was fed up with losing so often.’   (LSC, Quer 2016: 207)

Like standard cases of reported speech, role shift can be embedded under verbs of saying, asking, thinking, and believing. In this respect, role shift seems to be a typical case of indirect speech involving sentential complementation. However, unlike standard cases of reported speech, role shift can also co-occur with other kinds of communicative predicates, other discourse-dependent markers such as, for instance, the name of the reported signer, with elliptical matrix clauses, or even without an overt matrix clause. In this respect, role shift seems to behave more like direct speech (Quer 2016). Schlenker (2017a) applies standard tests to distinguish direct speech from indirect speech such as long extraction, the licensing of negative polarity items, or the interpretation of indexicals to role shift in American Sign Language (ASL) and French Sign Language (LSF). He concludes that these tests “yield a rather complex picture, parts of which are compatible with a quotational analysis” (Schlenker 2017b: 7; cf. also Quer 2013). Role shift thus seems to be a hybrid construction combining properties of both indirect and direct speech. Not surprisingly, the theories which we discuss below typically assume that role shift is an instance of either direct or indirect speech with additional construction-specific properties.3 Furthermore, we will see later that the non-quotational use of role shift in constructed action makes the whole picture even more complex.
Role shift has often been compared to specific quotative constructions in spoken languages such as English be-like (Padden 1986; Lillo-Martin 1995, 2012). The following two examples show that be-like constructions, like role shift, can be used to introduce speech reports as in (2a) and gestural demonstrations of actions as in (2b). In addition, a speech report in be-like constructions, like role shift, is frequently accompanied by vocal as well as manual and non-manual gestures to demonstrate (or enact) the linguistic and non-linguistic behavior of the reported person.

(2) a. I was like, “Who is it?”   (Lillo-Martin 2012: 369)
    b.                       onset   gesture “sticking card into”
       But then they’re like “Stick this card into this machine.”   (Streeck 2002: 584)

The existence of a counterpart in spoken languages raises the question whether we are dealing with one (modality-independent) phenomenon or whether role shift in sign language is subject to modality-specific constraints not applicable to spoken languages. I will come back to be-like constructions and the issue of a unified modality-independent theory in the context of recent formal analysis of role shift and gestural demonstrations in Section 16.6 below.


16.3  Attitude and action role shift Terminology is a slippery area in this field. In the literature, many different notions have been proposed to describe different phenomena grouped under what has been called ‘role shift’ by many researchers. A prominent branch of research grounded in cognitive linguistics prefers the notion ‘constructed’ as in constructed action and constructed dialogue to highlight the use of gestural components in and the demonstrational function of role shift in discourse and narration (Metzger 1995; Liddell & Metzger 1998; Dudis 2004; Quinto-​Pozos 2007; Cormier et al. 2013, 2015). Formal linguists usually prefer the notion ‘shift’ as in shifted perspective, perspective shift, role shift, shifted reference, referential shift, reference shift, and body shift to emphasize either the prosodic non-​manual features (physically) marking role shift such as shift of the upper part of the body and change of eye gaze or to refer to the context-​changing potential of role shift including a shifted interpretation of indexicals (Engberg-​Pedersen 1995; Lillo-​Martin 1995; Lee et al. 1997; Janzen 2004; Zucchi 2004; Quer 2005, 2011; Pyers & Senghas 2007; Herrmann & Steinbach 2012; Earis & Cormier 2013; Hübl 2014; Maier 2016, 2018; Schlenker 2017a, 2017b, 2018; Herrmann & Pendzich 2018; Hübl et al. 2019). Other notions used in the literature are role playing, reported action, role taking, and role switching, among others. Since these notions are typically used to emphasize and describe different selected aspects of role shift in the context of different theoretical approaches, they are not fully interchangeable. In this chapter, I follow the terminology proposed in Schlenker (2017a, 2017b) and use the notions attitude role shift (AtRS) and action role shift (AcRS), which combine aspects of cognitive (constructed dialogue and constructed action) and formal (role shift) terminology. The terminological distinction between two different kinds of role shift is a first helpful descriptive step to identify two prototypical functions of role shift, i.e., speech report and action report. It is not meant to presuppose the necessity of two different analyses for these two kinds (or functions) of role shift. I come back to the question of linguistic analysis below (for more discussion, see Schlenker (2017a, 2017b); Quer (2018)). As already mentioned, role shift can be used to reproduce linguistic and non-​linguistic actions of other people. Although we will see below that there is no clear-​cut distinction between these two kinds of role shift, I use this well-​established terminological distinction to refer to these two prototypical kinds of role shift: the notion AtRS is used for the reproduction or demonstration of linguistic material such as signs, phrases, sentences, or sequences of sentences (including prosodic markers) to report utterances, thoughts, or propositional attitudes of a person. Note that AtRS is sometimes also called quotational role shift. By contrast, the notion AcRS is used to refer to the reproduction of actions by the signer to demonstrate or depict actions a person performed in another context. The following example taken from a German Sign Language (DGS) version of the fable The tortoise and the hare gives a nice illustration of both kinds of role shift (Herrmann & Pendzich 2018). AtRS is typically used to quote utterances as can be seen in the examples (3a) and (3c), which are illustrated in the first and last picture of Figure 16.1. 
The signer clearly shifts the upper part of the body, the head, and the eye gaze to the direction of the referential locus 3b on the contralateral side of the left-handed signer. This locus has been assigned to the tortoise in the preceding context. Referential body shifts toward the locus of the virtual addressee are typical for AtRS. In addition, the signer often changes the facial expression to gesturally imitate the reported signer, in this example the mocking facial expression of the hare.


(3) a.                                             rs
       HARE TYPICALLY PAM3b TORTOISE LIKE MOCK3b IX2 FUNNY SO CLwe: move
       ‘The hare typically likes to mock the tortoise: “It’s so funny how you walk”.’
    b.    rs
       CLb: tortoise-imitate-walking-of-tortoise
    c.    rs
       SLOW CLwe: move
       … ‘You walk so slow …’
       (DGS, Herrmann & Pendzich 2018: 283)

Figure 16.1  Hare mocking and imitating tortoise (DGS, Herrmann & Pendzich 2018: 283; © John Benjamins, reprinted with permission)

By contrast, AcRS as illustrated in (3b) is used to demonstrate actions. Since sign language and gesture use the same modality, signers can easily use the linguistic articulators for gestural demonstrations of actions as, in this example, the clumsy and slow way of walking of the tortoise, and thus perfectly integrate the gestural demonstration in the linguistic utterance. The example also shows that the signer uses the AtRS to frame the AcRS illustrated in the second, third, and fourth picture of Figure 16.1. As opposed to the AtRS, the AcRS does not involve a body shift towards the referential locus of the tortoise. However, the signer changes the body posture and the facial expression to imitate the (hare imitating the) tortoise. As we will see below, non-​referential changes of body posture and facial expressions are typical markers of AcRS. In AcRS, the signer uses his/​ her body or parts of his/​her body to demonstrate actions performed by another person or character, that is, the body of the signer becomes an integral part of the gestural demonstration and the propositional content of the utterance is mainly or completely realized by a gestural demonstration (Meir et al. 2007). I come back to the function of the body in role shift in more detail in Section 16.6 below. As already mentioned, AtRS is typically used to report the utterances, thoughts, or propositional attitudes of a person or a group of persons, that is, AtRS scopes over linguistic material. However, AtRS can be accompanied by gestural components such as specific facial expressions or behavioral traits of a signer, which can be used to express additional properties of the quoted utterance and the reported signer. The facial expression of the signer in (3a) and (3c), for instance, depicts the mocking hare. As already mentioned in the previous section, AtRS combines properties of direct and indirect speech and cannot only be embedded under verbs of saying and thinking but even under communicative or speech act verbs that do not license indirect speech such as for example, MOCK in the example above, which is used to introduce the complex AtRS including a sequence of AcRS. Note that AtRS is frequently used without any embedding matrix clause as in example (3c). This is possible because, on the one hand, the non-​manual 355


marking is usually sufficient to contextually identify speaker and addressee and, on the other hand, the grammatical structure of the AtRS (together with additional gestural markers) clearly marks the speech act of the reported utterance. Therefore, like in direct speech, all sentence types and all kinds of signer oriented expressive expressions can be used in AtRS. By contrast, AcRS is used to demonstrate non-​linguistic actions. AcRS typically includes (or only consists of) manual and non-​manual gestures. Linguistic expressions such as, for instance, certain kinds of classifiers can, however, also occur in the scope of AcRS. In (3b), for instance, the signer uses a body part (limb) classifier to imitate the walking of the tortoise. AcRS is typically used in signed discourse and narrations as an expressive tool to depict the behavior of other persons or characters. Example (3) already shows that there is no clear-​cut distinction between AtRS and AcRS. AtRS can be accompanied with gestural components and AcRS can include linguistic components. We will return to more complex mixed cases below. So far, we can conclude that prototypical cases of AtRS and AcRS have different functional and formal properties: while AtRS is used to report utterances, thoughts, or attitudes and thus includes mainly linguistic material (typically sentences denoting propositions), AcRS is used to report actions and includes mainly gestural demonstrations. Furthermore, both kinds of role shift differ in their non-​manual marking and in the presence and interpretation of indexicals, which typically appear in AtRS (Schlenker 2017a). In the next section, I first discuss the non-​manual marking of AtRS before I turn to the behavior of indexicals in AtRS in Section 16.5.

16.4  Non-​manual marking and point-​of-​view operators The example in (3) above shows nicely that both kinds of role shift are marked non-​ manually. AcRS is usually marked by change in facial expression and body posture to depict (relevant aspects of) the behavior of the reported individual. By contrast, AtRS is not only marked by a change in facial expressions but also involves shifting non-​ manuals. In a corpus study on AtRS in DGS, Herrmann & Steinbach (2012) identified four non-​manual markers of AtRS:  (i) change of eye gaze (eg), (ii) change of head position (hp), (iii) body lean (bl), and (iv) facial expression (fe). All four non-​manuals spread over the whole role shift. In addition, Koulidobrova & Davidson (2015) found an interesting difference in the onset of the non-​manual markers of AtRS in ASL. In case of doxastic verbs such as I M AG I N E , the non-​manuals spread over the matrix predicate. By contrast, with proffering verbs such as S AY , spreading over the matrix predicate is not possible. The onset of the non-​manuals thus does not only depend on the scope of the shifted clause but also on the semantics of the embedding predicate. A second aspect is the syntactic dependency of non-​manuals. Herrmann & Steinbach (2012) argue that the first three non-​manuals, i.e., eye gaze, head position, and body lean, depend on the referential loci the signer and addressee of the reported utterance are linked to. By contrast, (gestural) facial expressions are independent of these loci. Eye gaze, head position, and body lean are thus a kind of discourse-​dependent grammatical agreement marker similar to agreement verbs. Like agreement verbs, role shift depends on referential loci of two arguments. However, unlike agreement verbs, it is not the subject and the object of the clause that control agreement but the signer and addressee of the reported utterance (which typically correspond to the subject and object of the matrix


clause). In addition, agreement in role shift is not realized by the manipulation of lexical manual features of the verb (path movement and orientation) but by the manipulation of independent (suprasegmental) non-manual features. Example (4) illustrates the pattern of non-manual agreement in DGS: all three non-manuals agree with the locus of the addressee (i.e., the object of the matrix clause), that is, the eye gaze, the head position, and the midsagittal plane of the body are aligned with the referential locus of the addressee, i.e., 3b. In addition, the frontal (or vertical) plane is aligned with the referential locus of the signer, in our example 3a. Just like manual verb agreement, non-manual role shift agreement shows a primacy of ‘object’ agreement (for verb agreement, see Padden 1983; Lillo-Martin & Meier 2011; Pfau et al. 2018).

(4)          3a-bl-3b  hp-3b  eg-3b   (role shift agreement)
    HARE3a TYPICALLY PAM3b TORTOISE3b LIKE MOCK3b IX2 FUNNY SO CLb: move   (verb agreement)
    (DGS)

As a consequence, eye gaze, head position, and body lean interact with the grammatical system.4 Facial expression, on the other hand, is an independent gestural marker that is used to depict non-​linguistic properties of the reported signer (Schlenker 2017b). A third aspect is the frequency (or obligatoriness) of the non-​manual markers. In their corpus study, Herrmann & Steinbach (2012) found that the change of facial expression is the most important non-​manual marker which is almost obligatory, that is, AtRS is at least marked by a non-​linguistic marker gesturally depicting the face of the reported signer. The three linguistic markers, i.e., eye gaze, head position, and body lean, also show a clear pattern: while eye gaze is the most frequent grammatical marker, body lean is the least frequent grammatical marker. A possible explanation might involve economy constraints and the physical interdependence of the three markers. Consider economy first: body lean is obviously the most complex and ‘costly’ non-​manual since it involves a change of the position of the upper part of the body (including the head). By contrast, a change in head position and/​or eye gaze is less costly and equally efficient since both the head and the eyes are productive linguistic markers in sign languages. Eye gaze has, for instance, been argued to be a productive non-​manual marker of different grammatical features, including verb agreement in ASL (Bahan et  al. 2000). Second, the non-​ manual markers show a clear pattern of physical interdependence: a change in posture usually also involves a change in head position which again involves a change in eye gaze. Consequently, a change in body position usually goes along with a change in eye gaze but not vice versa. Note that this pattern of the grammatical agreement markers in role shift confirms again the primacy of ‘object’ agreement since the marker of subject agreement (body lean) is the least frequent and thus the least important one. The distribution of the grammatical non-​manuals can be accounted for by a slightly modified version of the analysis proposed in Quer (2005). This account builds on Lillo-​ Martin’s (1995) syntactic analysis of role shift and a more general proposal for the structure of the higher left periphery made in Speas & Tenny (2003).5 Speas & Tenny propose a syntactic implementation of sentence mood in terms of a complex speech act phrase formed by SAP and SA*P as illustrated in (5). Three aspects of this analysis are relevant


for the analysis of role shift in sign languages. The specifier of SAP hosts the pragmatic role signer (i.e., XP), the complement position of SA*° hosts the pragmatic role addressee (i.e., ZP) and the specifier of SA*P hosts the utterance content ‘uc’ (i.e., YP). Quer (2005) analyzes role shift as a kind of indirect speech. In role shift, the shifted clause is a complex SAP embedded as the complement of a speech act predicate in the matrix VP. The head of SA*° hosts a point-​of-​view operator (PVOp), which incorporates into the matrix predicate (for the assumption of a PVOp, see also Lillo-​Martin 1995). Leaving details aside, the structure in (5)  provides the structural configuration necessary to derive the non-​manual marking of the embedded utterance content YP in the specifier of SA*P illustrated in (4) above: the non-​manuals lexically associated with the PVOp scope over YP in the specifier of SA*P similar to other syntactic non-​manuals such as wh-​markers, topic markers, or negation markers (cf. Wilbur & Patschke 1999; Pfau & Quer 2010). At the same time, the realization of the non-​manuals is controlled by referential loci associated with the embedded signer in the specifier of SAP and the embedded addressee in the complement position of SA*P.

(5)  [VP V [SAP XPsigner [SA’ SA° [SA*P YPuc [SA*’ [SA*° PVOp] ZPaddressee]]]]]

The structure in (5)  allows for a transparent implementation of the similarities and differences between manual verb agreement and non-​manual role shift agreement. In both kinds of agreement, the arguments of the target of agreement (i.e., the verb and the PVOp) occur in the (extended) projection of the agreement target. In addition, the overt realization of agreement depends on the referential loci associated with these arguments. And finally, in both cases, agreement is expressed by a manipulation of phonological features. The main difference is that role shift agreement is a structural relation in the highest functional domain of the sentence, the SA(*)P, which is the highest projection of the extended CP-​domain. By contrast, verb agreement is realized in the lowest domain of the clause, the VP. Thus, role shift agreement is controlled by speech act related entities, namely the signer and the addressee of the utterance. Verb agreement, by contrast, is controlled by two arguments lexically selected by the verb. Note finally that further research is necessary to implement the difference in the onset of the non-​manuals in ASL depending on the type of embedding verb (i.e., doxastic verbs vs. proffering verbs) discussed above, if this generalization holds cross-​linguistically.


16.5  Context-​shifting operators and indexicals The syntactic analysis sketched in (5), which takes role shift to be an instance of indirect speech, has the advantage that it accounts for the distribution of non-​manuals marking role shift. In addition, it predicts –​at least at first sight –​that indexicals in role shift are interpreted like indexicals in indirect speech, that is, they should receive their value in the actual context of utterance and not in the context of the reported or quoted utterance. According to Kaplan (1989), deictic expressions such as first and second person pronouns and indexical temporal and locative expressions are directly referential. Consequently, ‘[t]‌he semantic value of an indexical is fixed solely by the context of the actual speech act, and cannot be affected by any logical operators’ (Schlenker 2003: 29). This is illustrated by the English example in (6): (6)  [Context: Sue is talking to Paul in Berlin.] Mary told Peter in Hamburg that I saw you here.



In (6), the first and second person indexicals cannot be coreferent with (or bound by) Mary and Peter, the subject and object of the matrix clause. Instead, both indexicals directly refer to the actual speaker and addressee, i.e., Mary told Peter that Sue met Paul. Likewise, the locative adverb here is (at least preferably) interpreted in the context of the actual utterance and refers to Berlin.6 However, we already saw in Sections 16.1 and 16.2 that first and second person indexicals in role shift –​just as indexicals in direct quotation –​are interpreted in the context of reported speech and not in the context of the actual utterance. Consider example (3a) again, here repeated as (7). The second person indexical IX2 under role shift does not refer to the actual addressee of the narration but to the addressee of the hare’s utterance, i.e., the tortoise.

(7)                                             rs
    HARE TYPICALLY PAM3b TORTOISE LIKE MOCK3b IX2 FUNNY SO CLwe: move
    ‘The hare typically likes to mock the tortoise: “It’s so funny how you walk”.’
    (DGS)

Fortunately, the indirect speech analysis in (5) offers a solution for the problem of shifted indexicals at the syntax/semantics interface: a point-of-view or role-shift operator binds all indexical context variables in its scope (for a similar analysis, see Lillo-Martin (1995), Quer (2005), and Schlenker (2017a,b)). Consequently, in role shift, the second person pronoun in (7) functions as a logophoric pronoun, which is not bound by the actual addressee but by the embedded addressee, the object of the speech act predicate MOCK. Shifted indexicals in embedded contexts such as (7) above are the consequence of a covert context-shifting operator, which prevents the value of indexicals from being fixed by the context of the actual utterance. Schlenker (2003), following Kaplan (1989), calls such operators monsters. This analysis is supported by typological research on spoken languages that also pose a challenge to Kaplan’s generalization. Unlike English, languages like Slave or Amharic permit shifting indexicals in the scope of attitude verbs (Rice 1986; Schlenker 2003; Speas & Tenny 2003; Anand & Nevins 2004; Anand 2006). In such shifting languages, first and second person pronouns in embedded clauses such as that I saw you here in the English example (6) above can be bound in the context of the reported speech, i.e., by the matrix subject and object. Based on typological research on shifting spoken languages, Anand & Nevins (2004: 24) argue that indexical shifting is triggered by a set of context-shifting (monstrous) operators and propose a general constraint on indexical shift:

(8)  Shift-Together Constraint: Shiftable indexicals must shift together.

Example (7) shows that from a typological perspective, sign languages belong to the group of shifting languages: indexicals can be shifted in the scope of role shift. Schlenker (2017a) thus calls role shift a ‘super monster’ (for a different approach, see Zucchi (2004) and the discussion below).7 In addition, Schlenker (2017a) argues on the basis of examples like (9), which contain two indexicals (i.e., IX1 and HERE) but do not allow mixed readings, that the Shift-Together Constraint in (8) also holds for role shift in ASL and LSF.

(9)  [Context: In 2010, I met Jean in LA. At the time, he often changed jobs and home bases.]
                                        rs
     DATE 2010 PLACE LA JEAN SAY DATE 2014 IX1 WORK HERE.
     ‘In 2010 in LA Jean said, “In 2014 I will work here”.’
                                            (LSF, Schlenker 2017a: 15–16)

For sign languages such as ASL and LSF, the idea is that the non-manual marking of the role shift overtly indicates the presence of a (super)monster triggering the shifting pattern exemplified in (9). Following this line of argumentation, indexicals in the scope of role shift always shift together. However, studies on other sign languages have revealed that the question of shifting indexicals is more complex. Quer’s (2005) classical example from LSC shows that not all indexicals need to shift. The AtRS in (10) again contains two indexicals, the first person indexical IX1 and the locative adverb HERE. Interestingly, only the first person pronoun is shifted in this example. The locative adverb remains unshifted and refers to the actual context of utterance, which is Barcelona. Similar mixed examples are also attested for Danish Sign Language (DSL) and DGS (Engberg-Pedersen 1995; Herrmann & Steinbach 2012).

(10)  [Context: Sentence uttered in Barcelona]
          t                           rs-i
      IXa MADRIDm MOMENT JOANi THINK IX1-i STUDY FINISH HEREb
      ‘When he was in Madrid, Joan thought that he would finish his study in Barcelona.’
                                            (LSC, Quer 2005: 154)

One possible explanation put forward in Hübl (2014) assumes that indexical expressions in (at least some) sign languages differ in their shifting potential. Unlike first and second person indexicals, which seem to shift obligatorily, temporal and locative indexicals are more flexible. In the following example, the temporal indexical TOMORROW in (11a) is more likely to shift than the semantically similar temporal indexical TODAY in (11b).

(11)  a.  [Context: Sentence uttered on Saturday]
                                                    rs
          PAST THURSDAY M-A-R-I-A3a T-I-M3b MEET TELL IX1 LIKE TOMORROW MOVIES GO
          ‘On Thursday, Maria and Tim met and she told him that she would like to go to the movies on Friday/Sunday.’
      b.  [Context: Sentence uttered on Thursday]
                                                        rs
          PAST WEDNESDAY M-A-R-I-A3a T-I-M3b BOTH EAT IXL TELL IX1 LIKE TODAY DANCE GO
          ‘On Wednesday, Maria and Tim ate there together and she told him that she would like to go dancing on Thursday.’
                                            (DGS, Hübl 2014: 175)

Comparing the shifting potential of different indexicals in DGS, Hübl observes that differences in the shifting potential of these indexicals depend on individual lexical-iconic properties: HERE and TODAY, which are both produced with a downward movement of the index hand, iconically refer to the actual location/time of utterance and thus have a strong preference not to shift. By contrast, TOMORROW and YESTERDAY are not iconically linked to the actual context of utterance. Both signs use the midsagittal plane to express temporal relations metaphorically (Pfau et al. 2012: 189). Therefore, TOMORROW and YESTERDAY can but need not shift. And finally, the first and second person indexicals IX1 and IX2 are again iconically linked to the shifted signer and addressee, making a shifted interpretation obligatory. This observation is in line with Quer’s example (10), where the locative indexical HERE, like its counterpart in DGS, does not shift. The observation that mixed interpretations are possible in sign language role shift depending on lexical properties of indexicals complies with observations made for free indirect discourse in spoken languages, as illustrated in example (12) (Banfield 1977, 1982).8

(12)  Mary was walking home. Damn. Why was she so fucking tired today?

Like in the role shift examples, in free indirect discourse, two contexts overlap and are simultaneously available to fix the value of indexical expressions: (i) CA, the context of the actual utterance (the context of the narrator) and (ii) CR, the context of the reported utterance (the context of the protagonist). In free indirect discourse, indexicals can, in principle, be interpreted in both contexts, making mixed interpretations possible (Schlenker 2004; Sharvit 2008; Eckardt 2014). Leaving technical details aside, Eckardt (2014) argues that indexicals in free indirect discourse can be divided into two groups: the first group includes tense (i.e., was walking and was in example (12)) and pronouns (i.e., she in (12)), which must be interpreted relative to CA. The second group comprises different kinds of indexical expressions such as temporal adverbials (i.e., today) and speaker-oriented expressions (i.e., damn and fucking), which are interpreted relative to CR. It is obvious that this account can be extended to the interpretation of indexicals in role shift: (i) indexicals such as HERE and NOW that are iconically linked to the actual context of utterance are interpreted relative to CA; (ii) indexicals such as IX1 and IX2 that are iconically linked to the shifted context are interpreted relative to CR, and (iii) indexicals such as TOMORROW and YESTERDAY that have no iconic preference can be interpreted relative to both contexts, CA and CR.


Context shift analyses of role shift and free indirect discourse take as a basis an indirect speech analysis upgraded by context-shifting operators. An alternative analysis proposed by Maier (2017) shifts the perspective completely and takes role shift and free indirect discourse to be an instance of direct speech (see also Maier 2015, 2016, 2018). Consequently, the analysis of ‘shifted’ indexicals proceeds in the opposite direction. Since in direct speech CR is the dominant context, the interpretation of past tense and the third person pronoun she in example (12), as well as the non-shifted indexical HERE in example (10), is achieved by a mechanism of unquotation. Using well-established orthographic devices for quotation (quotation marks) and unquotation (brackets), the underlying structure of the free indirect discourse example in (12) and the role shift in (10) can be represented as follows:

(13)  a.  Mary was walking home. “Damn. Why [was] [she] so fucking tired today?”
      b.  IXa MADRIDm MOMENT JOANi THINK “IX1-i STUDY FINISH [HEREb]”
                                            (LSC, Quer 2005)

In both examples, the ‘unquotation marks’ (the brackets in (13)) indicate that expressions in their scope are interpreted “from the reporter’s perspective” (Maier 2017: 267), that is, ‘unquotation marks’ are the semantic counterpart of quotation marks. Quotation marks signal that the expressions they include are interpreted from the reported speaker’s perspective. Quotation marks are thus specifications of tokens. By contrast, ‘unquotation marks’ are specifications of content (for a similar analysis of mixed quotations in spoken languages, see Cappelen & Lepore (1997, 2012)). Like the context shift analysis, the unquotation analysis can be supplemented with the observation that in sign languages, indexicals differ in their shifting potential or, under the perspective of an unquotation analysis, in their potential to be affected by unquotation: indexicals such as HERE and NOW are highly accessible for unquotation and can thus occur in the scope of ‘unquotation marks’. By contrast, first and second person indexicals most likely occur in the scope of quotation marks.

An alternative analysis to Hübl’s (2014) lexical-iconic approach has been proposed by Maier (2017) and Hübl et al. (2019). Following Evans’ (2012) observation of second person attraction in spoken language quotation, Maier assumes that the general pragmatic principle of attraction in (14) is responsible for the regulation of unquotation (Maier 2017: 270):

(14)  Attraction: When talking about the most salient entities in your immediate surroundings, use indexicals to refer to them directly.

The principle of attraction predicts that contextually salient entities that can be directly referred to with indexicals are expected to attract the use of the corresponding indexicals. In quotational contexts, the application of attraction conflicts, however, with another general principle, the verbatim condition. Nevertheless, attraction should promote unquotation even in quotational contexts such as role shift if direct reference is possible. A first empirical study on attraction in AtRS in DGS shows that indexicals can receive unquoted (unshifted) interpretations in role shift (Hübl et al. 2019). Especially reported utterances about the addressee have a clear preference for the use of the indexical IX2 even if this report violates the verbatim condition.
Therefore, the example with attraction in (15b) is preferred over the corresponding verbatim example in (15a) using the name of the actual addressee Anna.


(15)  Felicia signs (original utterance):
      IX1 DREAM ANNA IX3 LOTTO WIN
      ‘I dreamed that Anna won the lottery.’

      a.  Verbatim condition. Tim reports, addressing Anna:
                               rs
          FELICIA 3INFORM1 IX1 DREAM ANNA IX3 LOTTO WIN
          ‘Felicia told me, “I have dreamed that Anna won the lottery”.’

      b.  Attraction condition. Tim reports, addressing Anna:
                               rs
          FELICIA 3INFORM1 IX1 DREAM IX2 LOTTO WIN
          ‘Felicia told me, “I have dreamed that you won the lottery”.’
                                            (DGS, Hübl et al. 2019: 192)

So far, we have seen that there are competing analyses of the interpretation of indexicals in role shift. One way to go is to analyze role shift as a specific kind of indirect speech with shifting (monstrous) operators. The alternative approach assumes that role shift is an instance of direct speech with the option of unquotation, possibly constrained by the principle of attraction. The empirical verification of the predictions of both kinds of approaches is, however, still a desideratum. In the next section, we turn to a third kind of approach that focuses on the demonstrative (iconic) power of role shift.

16.6  Gestural demonstrations

A third competing approach takes role shift as a specific form of direct speech including non-linguistic action reports. Building on classical ideas of quotation as demonstrations, Kathryn Davidson (2015) develops a powerful theory of role shift as (gestural) demonstration that accounts not only for both kinds of role shift but also for role shift in both modalities (for a similar idea, see Zucchi (2011)). One pillar of this account is Donald Davidson’s (1979) seminal formal theory of the semantics of quotation marks in written language. According to Donald Davidson, “[q]uotation is a device for pointing to inscriptions (or utterances) and can be used, and often is, for pointing to inscriptions or utterances spatially or temporally outside the quoting sentence” (Donald Davidson 1979: 38). Following this analysis, the sentence in (16a) receives the semantic representation in (16b):

(16)  a.  Quine says that quotation “… has a certain anomalous feature.”
      b.  Quine says, using words of which these are a token, that quotation has a certain anomalous feature.

The demonstration theory treats quotation marks as a referring device similar to (gestural) pointing. This brings us to the second important pillar of the demonstration theory of role shift. The idea of quotation as demonstrations has been further developed by Clark & Gerrig (1990), who develop a more general cognitive theory of gestural demonstrations. Following this idea, linguistic quotations are simply a special kind of demonstration. Speakers and signers can use (different parts of) their body to demonstrate different kinds of linguistic and non-linguistic actions.


According to this theory, AtRS is a typical instance of linguistic demonstration, i.e., demonstrating the linguistic actions (utterance) of another person. By contrast, AcRS is typically used to demonstrate non-linguistic actions such as sounds, facial expressions, or movements of the body or body parts. The seminal aspect of Clark & Gerrig’s (1990) account is that it combines both aspects and develops a unified theory of demonstration.9 We already saw that quotations do not only consist of linguistic material but may also depict the speech act, the facial expression, the voice, or non-linguistic gestures of the quoted person. This is illustrated by the following spoken language examples (Kathryn Davidson 2015: 485, 488). In the first example (17a), the speaker demonstrates Bob’s actions of the mouth and the hands. In the second example (17b), a sound produced by Bob is simultaneously demonstrated together with his utterance. And in the third example (17c), the speaker depicts Mary’s facial expression when she uttered the sentence I’ll never be prepared (for non-linguistic demonstrations in spoken languages, see also Günthner (1999); Streeck (2002); Stec et al. (2016); for a comparison of gestural demonstrations in sign and spoken language narration, see Earis & Cormier (2013) and Bressem et al. (2018)).

(17)  a.  Bob was eating like [gobbling gesture with hands and face].
      b.  Bob saw the spider and was like “ahh! [in a scared voice].”
                                                          :-/__________________
      c.  I saw Mary studying for finals and she was all “I’ll never be prepared”

We already saw that role shift in sign languages can also be used to demonstrate non-linguistic actions of other persons. These actions can accompany the demonstration of utterances (i.e., linguistic actions) or they can be used in isolation to demonstrate the non-linguistic behavior of another person. The former is illustrated by example (18), taken from the fable The shepherd boy and the wolf (cf. Herrmann & Pendzich 2018). In (18), the signer reports the utterance of the villagers who ask the boy whether they can help him.10 At the same time, the accompanying facial expression ‘fe’ illustrated in Figure 16.2 depicts the exhausted faces of the villagers who ran to the boy because they thought that he urgently needed help.

             fe
             rs
(18)  3HELP1 WHAT?                                    (DGS)
      ‘Can we help you?’

Figure 16.2  Exhausted villagers asking shepherd boy (© SignLab Göttingen)


By contrast, the stills in Figure 16.3 illustrate two purely gestural demonstrations. In the left sequence, which is partly repeated from Figure 16.1 above, the signer is demonstrating the hare, who is demonstrating the slow and clumsy movement of the tortoise. The pictures on the right show the demonstration of the behavior of the bored shepherd boy, holding a crook and looking around (for a more detailed discussion, see Herrmann & Pendzich (2018)).

Figure 16.3  Left: Hare imitating tortoise. Right: Shepherd boy holding a crook and looking around (DGS, Herrmann & Pendzich 2018: 283; © John Benjamins, reprinted with permission)

These examples from spoken and sign language show that demonstrations are a powerful instrument to refer to linguistic and non-linguistic actions of other persons. In both modalities, the body of the signer/speaker is systematically used to perform demonstrations. In spoken language, gestural demonstrations include vocal gestures as well as manual and non-manual co-speech gestures, as illustrated in the examples in (17) above. In sign languages, demonstrations are performed manually and non-manually. Clark & Gerrig (1990) developed a more general pragmatic account of quotation as demonstration in spoken languages. Unlike linguistic descriptions, demonstrations are nonserious actions such as playing, acting, and pretending, which are used to depict rather than describe. In addition, demonstrations are typically selective since they only depict relevant aspects of the referent. Clark & Gerrig divide demonstrations into four aspects:

(19)  a.  Depictive aspects: All aspects that are intended to depict aspects of the referent
      b.  Supportive aspects: Aspects that are necessary in the performance of (a)
      c.  Annotative aspects: Comments on the demonstration by the agent who demonstrates
      d.  Incidental aspects: All remaining aspects the demonstrating agent has no intention about

Demonstrations are thus complex intentional actions that require some pragmatic reasoning about which parts of these actions are depictive (i.e., the most relevant part) and which are supportive, annotative, and incidental. This property of demonstration is summarized by the decoupling principle (Clark & Gerrig 1990: 768):

(20)  Decoupling principle: Demonstrators intend their recipients to recognize different aspects of their demonstrations as depictive, supportive, and annotative.


According to Clark & Gerrig (1990: 769), the performance and the interpretation of a demonstration is faced with the problem “how to decouple the depictive, supportive, annotative, and incidental aspects from one another”, that is, both the demonstrator and the addressee need to know which aspect of the demonstration a facial expression, a handshape, a body posture, a movement, or a sound (in spoken language demonstrations) belongs to. It is obvious that depictive aspects are the privileged part of a demonstration since they depict the action of the referent. The partiality principle accounts for this special status of depictive aspects:

(21)  Partiality principle: Demonstrators intend the depictive aspects of a demonstration to be the demonstration proper, the primary point of their demonstration.

Two more principles account for the intention of the demonstrator and the formal marking of demonstrations:

(22)  Selectivity principle: Demonstrators intend their demonstrations to depict only selective aspects of the referents under a broad description.

(23)  Markedness principle: Whenever speakers mark an aspect of a quotation, they intend their addressees to identify that aspect as nonincidental – that is, as depictive, supportive, or annotative.

With these principles, Clark & Gerrig offer a comprehensive descriptive analysis of the pragmatics of demonstrations in spoken languages that can also be applied to sign languages (cf. Liddell & Metzger 1998). What Clark & Gerrig do not provide, however, is a formal theory of demonstrations and the interaction between gestural demonstrations and linguistic descriptions. Interestingly, the first formal account of demonstrations was developed in connection with the analysis of role shift in sign languages. Kathryn Davidson (2015) takes up the two classical approaches by Donald Davidson (1979) and Clark & Gerrig (1990) and develops a unified cross-modal formal theory of demonstration that accounts for both AtRS and AcRS. The starting point of her analysis is the obvious correspondence between spoken language quotative constructions such as be like and AtRS in sign languages.11 Both constructions introduce a demonstration defined as follows (cf. Kathryn Davidson 2015: 487):

(24)  Definition: A demonstration d is a demonstration of e (i.e., demonstration(d, e) holds) if d reproduces properties of e and those properties are relevant in the context of speech.

The properties of a speech event e that are reproduced (or depicted) include linguistic and non-linguistic material, that is, just like in Clark & Gerrig’s account, demonstration can be used to depict (or reproduce) a wide range of objects and/or actions, including linguistic and non-linguistic actions (cf. Kathryn Davidson 2015: 487):

(25)  Properties of a speech event include, but are not limited to, words, intonation, facial expressions, sentiment, and/or gestures.


Schlenker (2017b) offers a comparable analysis which is based on the notion of maximal iconicity. He formulates a general constraint on role shift: in role shift, all expressions that can receive an iconic interpretation must be interpreted iconically. Schlenker’s analysis combines insights of indirect speech analyses discussed in the previous section and demonstration analyses discussed in this section. In (26), the embedded clause (i.e., ‘IP’) is the sister of a role shift operator and subject to the following condition (Schlenker 2017b: 40):

(26)  Maximal Iconicity of Role Shift
      RSi IP is only acceptable relative to a syntactic environment a__b, a context c, an assignment function s and a situation w if RSi IP is interpreted maximally iconically relative to a__b, c, s and w.

In the following discussion, I put aside the differences between indirect and direct speech analyses and focus on the illustration of how the demonstration theory analyzes AtRS and AcRS (for a critical discussion of both approaches, see Kathryn Davidson (2015) and Schlenker (2017b)). Based on the definition of demonstration in (24) above, Kathryn Davidson defines the semantics of be like and role shift (‘rs’) as follows:

(27)  a.  [[like]] = λdλe [demonstration(d,e)]
      b.  [[rs]] = λdλe [demonstration(d,e)]

Both constructions introduce a demonstration in the semantic representation of the corresponding utterance. In spoken languages like English, the demonstration has a lexical marker (probably accompanied by gestural markers such as change of voice, body posture, and facial expressions). By contrast, sign languages use non-manual markers as described in Section 16.4 above to indicate demonstrations. With this account, we are now able to analyze the examples discussed above. For simple examples such as (28) and (29), the analysis is straightforward:

(28)  a.  John was like “I’m happy”
      b.  ∃e. [agent(e, John) ∧ demonstration(d1,e)]
          (where d1 is the reporter’s reproduction of John’s speech)

                rs
(29)  a.  MOM: IX1 BUSY
      b.  ∃e. [agent(e, mother) ∧ demonstration(d2,e)]
          (where d2 is the reporter’s reproduction of mother’s signing)
                                            (ASL, Lillo-Martin 1995: 162)
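To make the compositional step explicit (this unpacking is mine and is not spelled out in this form in the chapter), applying the role shift operator in (27b) first to the demonstration token d2 and then to the event variable e simply yields the demonstration condition, which is then conjoined with the descriptive material and existentially closed:

[[rs]](d2)(e) = demonstration(d2,e), hence (29b): ∃e. [agent(e, mother) ∧ demonstration(d2,e)]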

Kathryn Davidson’s theory can be extended to account for pure gestural demonstrations as illustrated in the spoken language example in (17a) analyzed in (30) and the sign language AcRS in Figure 16.3 (right) analyzed in (31). In both cases, the demonstration does not include any linguistic material but only gestural components.

(30)  ∃e. [eat(e, Bob) ∧ agent(e, Bob) ∧ demonstration(d3,e)]
      (where d3 is the reporter’s reproduction of Bob’s eating)


(31)  ∃e. [agent(e, shepherd boy) ∧ demonstration(d4,e)]
      (where d4 is the reporter’s reproduction of the shepherd boy’s behavior)

Demonstrations are thus a powerful means to reproduce the linguistic and non-linguistic actions of other people. As opposed to spoken languages, sign languages have the advantage that they use the same modality as manual and non-manual gestures. Therefore, gestural demonstrations overlap with and can thus be integrated more easily into the linguistic system. In role shift, demonstrations are typically restricted to the articulatory system of sign languages, that is, they are realized with the same articulators as linguistic utterances (hands, upper part of the body, head, face). As a consequence, gestural and linguistic aspects of demonstrations cannot easily be kept apart. I will come back to this aspect of demonstrations in sign languages below.

Kathryn Davidson’s theory is an important step towards a better understanding of role shift in sign language in particular and the interaction of language and gesture in general. On the one hand, it offers a basis for a unified analysis of AtRS and AcRS by integrating insights of Donald Davidson’s theory of quotation in written language and Clark & Gerrig’s theory of demonstration in written and spoken language. On the other hand, it provides a unified formal semantic implementation of demonstrations across modalities which is able to account for modality-independent aspects (demonstration(d,e) is a general two-place predicate available in both modalities) and for modality-specific tendencies (the visual-gestural modality of sign languages permits a direct integration of manual and non-manual gestural demonstrations). However, Kathryn Davidson’s theory needs further refinement and extensions to account for more complex examples of gestural demonstrations which are attested especially in sign language role shift. One important property of role shift is that it can be simultaneously used for multiple demonstrations. We discuss this aspect in the following section in more detail. Another property that cannot easily be accounted for by Kathryn Davidson’s theory is the complex interaction of linguistic and non-linguistic material in demonstrations. Reconsider example (17c) above, repeated as (32) below. In this example, the speaker is reproducing the words I’ll never be prepared to demonstrate the speech event of Mary. At the same time, s/he is using a specific facial expression to demonstrate Mary’s non-linguistic behavior.

                                                          :-/__________________
(32)  a.  I saw Mary studying for finals and she was all “I’ll never be prepared”
      b.  ∃e. [agent(e, Mary) ∧ demonstration(d5,e)]
          (where d5 is the reporter’s reproduction of Mary’s speech and facial expression)

Recall from the definition in (24) and (25) above that in Kathryn Davidson’s account both kinds of demonstrations (linguistic and non-linguistic ones) are treated equally: “Properties of a speech event include […] words, […] facial expressions, […]” (Kathryn Davidson 2015: 487). We already saw that this limitation to speech events does not account for all kinds of demonstrations in role shift. In addition, the pragmatics of the demonstration approach seems to be too powerful for classical kinds of speech reports such as direct and indirect speech as well as for typical cases of mixed quotation
in spoken language such as example (33). In mixed quotations, the quoted material is not only demonstrated but also integrated in the linguistic structure, i.e., used and mentioned at the same time (for mixed quotations, see Note 8, Cappelen & Lepore (1997) and Brendel et al. (2011)).

(33)  Lena said that this piece of art “is difficult to understand.”
                                            (Brendel et al. 2011: 4)

Therefore, Maier (2017, 2018) argues that speech reports are simultaneous mixtures of direct speech and action reports. In Maier’s theory, demonstrations in speech reports serve as optional modifiers. Following this insight, the semantic representation of example (17c/32a) would look as follows:

(34)  ∃e. [agent(e, Mary) ∧ form(e, ‘I’ll never be prepared’) ∧ demonstration(d6,e)]
      (where d6 is the reporter’s reproduction of Mary’s facial expression)

One advantage of such a hybrid account integrating formal aspects of linguistic expressions is that it can more easily account for the complex behavior of indexicals in role shift as discussed in Section 16.5 above. A second advantage is that it provides a tool to distinguish between more linguistic (AtRS) and more gestural (AcRS) kinds of role shift. And finally, it may be extended to more complex interactions of linguistic and gestural elements in AcRS where the gestural demonstration is not simply accompanying or replacing the linguistic demonstration. Consider the following two examples in Figure 16.4 below, which are taken from a DGS version of the fable The lion and the mouse. In the picture on the left, the signer demonstrates the falling down of the mouse from the lion’s nose. The picture on the right demonstrates the catching of the mouse by the lion. In both demonstrations, linguistic and gestural demonstrations interact in an interesting way.

Figure 16.4  Left: Mouse falling down. Right: Lion catching mouse (© SignLab Göttingen)

In the left picture, the signer is using two (linguistic) classifier handshapes simultaneously. While the non-​dominant (right) hand is a body part classifier for small animals, the dominant (left) hand is a whole-​entity classifier for animals. The facial expression gesturally demonstrates the surprised face of the mouse and, at the same time, the starting point of the mouse’s falling, which is the lion’s nose. In the right picture, the signer combines the agreement verb 1CATCH3 (where ‘3’ corresponds to the locus of the mouse’s landing) with a manual and non-​manual gestural demonstration of the catching. That is, in both examples, the signer is using gestural demonstrations in addition to (partial) linguistic descriptions to report the same event.


A second challenging interaction of gestural and linguistic material in role shift can also be found in the fables. In the following example taken from The shepherd boy and the wolf, the signer demonstrates the behavior of the shepherd boy, who is calling the villagers. This demonstration contains linguistic elements (WOLF INDEX SHEEP EAT and non-manual agreement, see Section 16.4) as well as gestural elements (HEY and a very expressive way of signing). In addition, the whole demonstration is framed by the sign SCREAM, a comment of the narrator. Interestingly, this linguistic description is not produced in neutral position but clearly in the scope of role shift, as can be seen in the picture on the right in Figure 16.5.

                              rs
(35)  SCREAM HEY WOLF INDEX SHEEP EAT INDEX HEY SCREAM          (DGS)

Figure 16.5  Shepherd boy shouting for villagers. Left: HEY. Right: SCREAM. (© SignLab Göttingen)

It is obvious that it is not the shepherd boy who is producing the sign SCREAM. Consequently, this sign cannot be part of the demonstration of the boy’s utterance but is a description of the event by the signer/narrator. To account for such examples, a theory of role shift as demonstration needs to be updated with a theory of unquotation and attraction along the lines of Maier (2014, 2017), cf. also Section 16.5 above.

16.7  Multiple perspectives

So far, we have seen that role shift is a powerful and flexible means of reporting the utterance and/or actions of other people. In addition, role shift is also a highly expressive construction that can be systematically used in sign language narration to express thoughts, attitudes, utterances, and actions from the point of view of one or more characters in a flexible and highly complex way (Poulin & Miller 1995; Earis & Cormier 2013). On the basis of the Göttingen DGS fable corpus, Herrmann & Pendzich (2018) describe interesting cases of complex demonstrations simultaneously reproducing the actions of more than one character (for descriptions of similar examples, see Metzger (1995); Liddell & Metzger (1998); Aarons & Morgan (2003); Pyers & Senghas (2007); Fischer & Kollien (2010); Jarque & Pascual (2016); Barberà & Quer (2018)). In the fables, signers systematically use different body parts in role shift to demonstrate linguistic and non-linguistic actions of different characters simultaneously. Consider the left picture in Figure 16.6 first. Here, the signer demonstrates the climbing of the mouse over the lion’s nose. The character of the mouse is associated with three parts of the body: the dominant hand (whole-entity classifier), the non-dominant hand (body part
classifier), and the face (facial expression). At the same time, parts of the head (especially the nose) are associated with the sleeping lion, the location of the mouse’s climbing. Hence, different body parts can be simultaneously associated with different characters in role shift. In this example, the dominant (or salient) character is the mouse, which is anaphorically represented by three parts of the body. The mouse is the figure in this example represented by the moving dominant hand. By contrast, the sleeping lion is less salient since its head only serves as the ground of the movements of the mouse.

Figure 16.6  Left: Mouse climbing over the face of the lion. Right: Villagers running to and asking shepherd boy (© SignLab Göttingen)

The three pictures on the right of Figure 16.6, taken again from the fable The shepherd boy and the wolf, illustrate another complex interaction of multiple demonstrations. In the first picture, the signer uses AcRS to demonstrate the running action of the villagers, who are represented by the whole body of the signer. In the second picture, the signer uses a more complex representation for the same event. The movement of the villagers is now demonstrated with both hands (whole-entity classifier) and the facial expression. However, the body does not represent the villagers anymore but the shepherd boy, the target of the movement of the villagers. This combination of perspectives is maintained in the third picture, where the signer demonstrates the villagers asking the boy whether they can help him. Recall from the previous section that the endpoint of the path movement of the agreement verb HELP is the body of the signer representing the addressee of the question, i.e., the shepherd boy. Again, different parts of the body are used to represent different characters simultaneously. These examples show the complexity and expressive power of role shift in signed discourse and narration. Sign languages exploit two modality-specific properties: the possibility to use different articulators simultaneously and the fact that manual and non-manual gestures use the same modality as sign languages. To account for this complex interaction of body parts, Herrmann & Pendzich (2018) build on Meir et al.’s (2007) seminal analysis of body as subject. According to Meir et al., the body of the signer can be used to represent one prominent argument of the event, namely the agent. The signer can thus use his or her body not only for linguistic descriptions but also for mimetic depictions or – following Clark & Gerrig (1990) – for gestural demonstrations. The examples above show that the pattern is even more complex. Signers can not only use their body holistically to demonstrate the actions of one character. They can also split their body and use different body parts to represent different characters. Consequently, in role shift, different body parts can express different demonstrations simultaneously. A corresponding extension of Kathryn Davidson’s (2015) formal semantic analysis of role shift in sign languages in (36a) could integrate multiple demonstrations that can be linked to different body parts (‘bp’), as illustrated in (36b).


(36)  a.  [[rs]] = λdλe [demonstration(d,e)]
      b.  [[rs]] = λd1 … λdn λe [demonstration(d1,e) ∧ … ∧ demonstration(dn,e)]
                                               |                       |
                                              bp1                     bpn
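To illustrate how such an extension might work (the following instantiation is my own and is not given in the chapter), the scene in Figure 16.6 (left) could be represented with two simultaneous demonstrations of the same event, each linked to different articulators:

∃e. [agent(e, mouse) ∧ demonstration(d1,e) ∧ demonstration(d2,e)]
(where d1, linked to the hands and the face, reproduces the climbing mouse, and d2, linked to the signer’s head, reproduces the sleeping lion that serves as the ground of the movement)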

16.8  Conclusion: role shift and modality

In this chapter, I discussed different aspects of role shift in sign languages that have been the focus of research in recent years: the non-manuals used to mark role shift, the syntactic structure of role shift (direct vs. indirect speech), the assumption of a (syntactic) role shift or point-of-view operator, the interpretation of indexical expressions, and the demonstrative power of role shift as speech and action report in spoken and sign languages. The discussion of new theoretical developments has shown the tremendous headway research has made in this field. Each theory provides a conclusive analysis of specific aspects of role shift in sign languages and comparable quotative constructions in spoken languages. However, so far, a comprehensive theory of role shift is still pie in the sky (cf., for instance, the two recent target articles on visible meaning and reported speech in Theoretical Linguistics (Schlenker 2018) and Linguistic Typology (Spronck & Nikitina 2019) and the corresponding comments).

We have seen that role shift is an expressive and powerful device that systematically integrates and combines linguistic and gestural components to demonstrate linguistic and non-linguistic actions of other persons. Role shift is thus an interface phenomenon per se. It is subject to linguistic restrictions but – at the same time – includes gestural components. It is not surprising that the distributions and functions of different kinds of classifiers in role shift have gained so much attention (Aarons & Morgan 2003; Kathryn Davidson 2015; Barberà & Quer 2018; Herrmann & Pendzich 2018, among others), especially because analyses of classifiers in sign languages highlight both linguistic and gestural aspects (see e.g. Emmorey & Herzig 2003; Benedicto & Brentari 2004; Schembri et al. 2005; Zwitserlood 2012; Brentari et al. 2012). Future research will reveal more modality-independent and modality-specific aspects of reported speech and reported action. In this context, a thorough empirical description and theoretical implementation of mixed forms of reported speech and reported action such as free indirect discourse in written language, complex AcRS in sign languages, be-like constructions combining gestural and linguistic elements in spoken languages, and the acquisition of role shift in both modalities will be highly relevant for a better understanding of multimodal communication and the communicative functions of gestural demonstrations in the visual-gestural and the auditory-oral modality. Note finally that a unified theory may also aim at integrating orthographic demonstrations used in the written modality of spoken languages.12

Notes

1 For a general overview of sentential complementation in sign languages, see Tang & Lau (2012) and Pfau & Steinbach (2016). For a description and analysis of sentential complementation in individual sign languages, see Davidson & Caponigro (2016) for ASL, van Gijn (2004) for Sign Language of the Netherlands (NGT), Geraci et al. (2008) and Geraci & Aristodemo (2016) for Italian Sign Language (LIS), Quer (2016) for LSC, Hauser (2019) for LSF, and Göksel & Kelepir (2016) for Turkish Sign Language (TID).


2 Glossing conventions: sign language examples are glossed according to common conventions: IX – index (pointing sign); CLb – body part classifier; CLwe – (whole) entity classifier; PAM – person agreement marker; non-manual markers: rs – role shift, t – topic, fe – facial expression, eg – eye gaze, hp – head position, bl – body lean.
3 In a recent paper, Spronck & Nikitina (2019) argue that “reported speech constitutes a dedicated syntactic domain, i.e., crosslinguistically it involves a number of specific/characteristic phenomena that cannot be derived from the involvement of other syntactic structures in reported speech, such as subordination” (Spronck & Nikitina 2019: 120).
4 Note that sign languages have the modality-specific possibility to integrate and grammaticalize gestural non-manual markers such as eye gaze and head movement (Pfau & Steinbach 2011; van Loon et al. 2014).
5 For a more recent and more elaborated implementation of the same idea, see Krifka (2020). For a critical discussion of the syntactic implementation of speech act related categories, see Gärtner & Steinbach (2005).
6 Temporal and locative indexicals seem to exhibit a greater flexibility than person indexicals and permit shifted interpretations even in languages like German or English (cf. e.g. Schlenker 2003; Brendel et al. 2011).
7 According to Schlenker (2017a, 2017b), role shift has two specific super properties: this modality-specific non-manual super monster shifts contexts not only in attitude reports but also in action reports, and it has an iconic (hyperintensional) component. I come back to iconicity and role shift in Section 16.6.
8 Another prominent example is mixed quotations in written languages, which combine indirect and direct speech as illustrated by Donald Davidson’s (1979: 28) famous example in (i) (for mixed quotations, see Cappelen & Lepore (1997); Maier (2014)). Like free indirect discourse, mixed quotations pose an interesting challenge for the interpretation of indexicals as can be seen in example (ii) taken from Cappelen & Lepore (1997: 429), where the first person pronoun us is bound by the possessive pronoun in the matrix clause (for a discussion of similar examples, see also Maier (2016) and Brendel et al. (2011)).
(i) Quine says that ‘… quotation has a certain anomalous feature’.
(ii) Mr. Greenspan said he agreed with Labor Secretary R.B. Reich ‘on quite a lot of things’. Their accord on this issue, he said, has proved ‘quite a surprise to both of us’.
9 Note that Clark & Gerrig develop a use-based (pragmatic) theory of demonstration. Consequently, linguistic demonstrations do not simply point to inscriptions but to utterances, i.e., to communicative acts (for differences between Donald Davidson’s and Clark & Gerrig’s theory of demonstration, see Clark & Gerrig (1990: 801)).
10 Strictly speaking, the villagers do not ask ‘can we help you’ (i.e., 1HELP2) but ‘can someone help me’ (i.e., 3HELP1). I come back to this seeming agreement mismatch below.
11 Note that Kathryn Davidson (2015) provides a unified demonstrational analysis of role shift and classifiers (cf. also Zucchi 2011). Since this chapter is on role shift in sign language, I ignore classifiers in the following discussion. I briefly come back to this issue in the conclusion in the context of modality-specific aspects of gestural demonstrations.
12 Various devices are relevant in this context, for example pictorial devices such as emojis (Gawne & McCulloch 2019), graphematic devices such as letter replications (Fuchs et  al. 2019), or speech and thought balloons (probably in combination with facial expressions) used in comics (Maier 2019, 2020).

References

Aarons, Debra & Ruth Morgan. 2003. Classifier predicates and the creation of multiple perspectives in South African Sign Language. Sign Language Studies 3(2). 125–156.
Anand, Pranav. 2006. De de se. Cambridge, MA: MIT PhD dissertation.
Anand, Pranav & Andrew Nevins. 2004. Shifty operators in changing contexts. Proceedings of Semantics and Linguistic Theory (SALT) 14. 20–37.
Bahan, Benjamin, Judy Kegl, Robert G. Lee, Dawn MacLaughlin, & Carol Neidle. 2000. The licensing of null arguments in American Sign Language. Linguistic Inquiry 31(1). 1–27.


Banfield, Ann. 1977. Narrative style and the grammar of direct and indirect speech. Foundations of Language 10(1). 1–39.
Banfield, Ann. 1982. Unspeakable sentences: Narration and representation in language of fiction. London: Routledge & Kegan Paul.
Barberà, Gemma & Josep Quer. 2018. Nominal referential values of semantic classifiers and role shift in signed narratives. In Annika Hübl & Markus Steinbach (eds.), Linguistic foundations of narration in spoken and sign languages, 251–274. Amsterdam: John Benjamins.
Benedicto, Elena & Diane Brentari. 2004. Where did all the arguments go? Argument-changing properties of classifiers in ASL. Natural Language & Linguistic Theory 22(4). 743–810.
Brendel, Elke, Jörg Meibauer, & Markus Steinbach. 2011. Exploring the meaning of quotation. In Elke Brendel, Jörg Meibauer, & Markus Steinbach (eds.), Understanding quotation, 1–33. Berlin: Mouton de Gruyter.
Brentari, Diane, Marie Coppola, Laura Mazzoni, & Susan Goldin-Meadow. 2012. When does a system become phonological? Handshape production in gesturers, signers, and homesigners. Natural Language & Linguistic Theory 30(1). 1–31.
Bressem, Jana, Silva H. Ladewig, & Cornelia Müller. 2018. Ways of expressing action in multimodal narrations – the semiotic complexity of character viewpoint. In Annika Hübl & Markus Steinbach (eds.), Linguistic foundations of narration in spoken and sign languages, 224–249. Amsterdam: John Benjamins.
Cappelen, Herman & Ernest Lepore. 1997. Varieties of quotation. Mind 106(423). 429–450.
Cappelen, Herman & Ernest Lepore. 2012. Quotation. In Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy. http://plato.stanford.edu/archives/spr2012/entries/quotation (28 May, 2020).
Clark, Herbert H. & Richard J. Gerrig. 1990. Quotations as demonstrations. Language 66(4). 764–805.
Cormier, Kearsy, Sandra Smith, & Zed Sevcikova-Sehyr. 2015. Rethinking constructed action. Sign Language & Linguistics 18(2). 167–204.
Cormier, Kearsy, Sandra Smith, & Martine Zwets. 2013. Framing constructed action in British Sign Language narratives. Journal of Pragmatics 55. 119–139.
Davidson, Donald. 1979. Quotation. Theory and Decision 11(1). 27–40.
Davidson, Kathryn. 2015. Quotation, demonstration, and iconicity. Linguistics and Philosophy 38(6). 477–520.
Davidson, Kathryn & Ivano Caponigro. 2016. Embedding polar interrogative clauses in American Sign Language. In Roland Pfau, Markus Steinbach, & Annika Herrmann (eds.), A matter of complexity. Subordination in sign languages, 151–181. Berlin: de Gruyter Mouton and Ishara Press.
Dudis, Paul G. 2004. Body partitioning and real space blends. Cognitive Linguistics 15(2). 223–238.
Earis, Helen & Kearsy Cormier. 2013. Point of view in British Sign Language and spoken English narrative discourse: The example of ‘The Tortoise and the Hare’. Language and Cognition 5(4). 313–343.
Eckardt, Regine. 2014. The semantics of free indirect discourse. How texts allow us to mindread and eavesdrop. Leiden: Brill.
Emmorey, Karen & Melissa Herzig. 2003. Categorical vs. gradient properties in classifier constructions in ASL. In Karen Emmorey (ed.), Perspectives on classifier constructions in sign languages, 221–246. Mahwah, NJ: Lawrence Erlbaum.
Emmorey, Karen & Judy Reilly. 1998. The development of quotation and reported action: Conveying perspective in ASL. In Eve V. Clark (ed.), Proceedings of the Twenty-Ninth Annual Stanford Child Language Research Forum, 81–90. Stanford, CA: CSLI Publications.
Engberg-Pedersen, Elisabeth. 1995. Point of view expressed through shifters. In Karen Emmorey & Judy Reilly (eds.), Language, gesture, and space, 133–154. Hillsdale, NJ: Lawrence Erlbaum.
Evans, Nicholas. 2012. Some problems in the typology of quotation: A canonical approach. In Dunstan Brown, Marina Chumakina, & Greville G. Corbett (eds.), Canonical morphology and syntax, 66–98. Oxford: Oxford University Press.
Fischer, Renate & Simon Kollien. 2010. Gibt es Constructed Action in Deutscher Gebärdensprache und in Deutsch (in der Textsorte Bedeutungserklärung)? Das Zeichen 86. 502–510.
Fuchs, Susanne, Egor Savin, Stephanie Solt, Cornelia Ebert, & Manfred Krifka. 2019. Antonym adjective pairs and prosodic iconicity: Evidence from letter replications in an English blogger corpus. Linguistics Vanguard 5(1). 1–15.


Gärtner, Hans-Martin & Markus Steinbach. 2005. A skeptical note on the syntax of speech acts and point of view. In Patrick Brandt & Eric Fuß (eds.), Form, structure, and grammar, 313–322. Berlin: Akademie Verlag.
Gawne, Lauren & Gretchen McCulloch. 2019. Emoji as digital gestures. Language@Internet 17, article 2.
Geraci, Carlo & Valentina Aristodemo. 2016. An in-depth tour into sentential complementation in Italian Sign Language. In Roland Pfau, Markus Steinbach, & Annika Herrmann (eds.), A matter of complexity. Subordination in sign languages, 95–150. Berlin: de Gruyter Mouton and Ishara Press.
Geraci, Carlo, Carlo Cecchetto, & Sandro Zucchi. 2008. Sentential complementation in Italian Sign Language. In Michael Grosvald & Dionne Soares (eds.), Proceedings of the thirty-eighth Western Conference on Linguistics, 46–58. Davis, CA: University of California Department of Linguistics.
Göksel, Aslı & Meltem Kelepir. 2016. Observations on clausal complementation in Turkish Sign Language. In Roland Pfau, Markus Steinbach, & Annika Herrmann (eds.), A matter of complexity. Subordination in sign languages, 65–94. Berlin: de Gruyter Mouton and Ishara Press.
Günthner, Susanne. 1999. Polyphony and the ‘layering of voices’ in reported dialogues: An analysis of the use of prosodic devices in everyday reported speech. Journal of Pragmatics 31(5). 685–708.
Hauser, Charlotte. 2019. Subordination in LSF. Nominal and sentential embedding. Paris: Université de Paris PhD dissertation.
Herrmann, Annika & Nina-Kristin Pendzich. 2018. Between narrator and protagonist in fables of German Sign Language. In Annika Hübl & Markus Steinbach (eds.), Linguistic foundations of narration in spoken and sign languages, 275–308. Amsterdam: John Benjamins.
Herrmann, Annika & Markus Steinbach. 2012. Quotation in sign languages – a visible context shift. In Ingrid van Alphen & Ingrid Buchstaller (eds.), Quotatives. Cross-linguistic and cross-disciplinary perspectives, 203–228. Amsterdam: John Benjamins.
Hübl, Annika. 2014. Context shift (im)possible: Indexicals in German Sign Language. In Martin Kohlberger, Kate Bellamy, & Eleanor Dutton (eds.), Proceedings of the 21st Conference of the Student Organization of Linguistics of Europe (ConSOLE) 21, 171–183. Leiden: Leiden University Centre for Linguistics.
Hübl, Annika, Emar Maier, & Markus Steinbach. 2019. To shift or not to shift. Quotation and attraction in DGS. Sign Language & Linguistics 22(2). 171–209.
Janzen, Terry. 2004. Space rotation, perspective shift, and verb morphology in ASL. Cognitive Linguistics 15(2). 149–174.
Jarque, Maria Josep & Esther Pascual. 2016. Mixed viewpoints in factual and fictive discourse in Catalan Sign Language narratives. In Barbara Dancygier, Lu Wei-Lun, & Arie Verhagen (eds.), Viewpoint and the fabric of meaning: Form and use of viewpoint tools across languages and modalities, 259–280. Berlin: de Gruyter Mouton.
Kaplan, David. 1989. Demonstratives. In Joseph Almog, John Perry, & Howard Wettstein (eds.), Themes from Kaplan, 481–614. Oxford: Oxford University Press.
Köder, Franziska. 2016. Between direct and indirect speech: The acquisition of pronouns in reported speech. Groningen: Rijksuniversiteit Groningen PhD dissertation.
Köder, Franziska. 2018. Reporting vs. pretending. Degrees of identification in role play and reported speech. In Annika Hübl & Markus Steinbach (eds.), Linguistic foundations of narration in spoken and sign languages, 207–222. Amsterdam: John Benjamins.
Koulidobrova, Elena & Kathryn Davidson. 2015. Watch the attitude: Role-shift and embedding in ASL. In Eva Csipak & Hedde Zeijlstra (eds.), Proceedings of Sinn und Bedeutung 19, 358–376. Public Knowledge Project.
Krifka, Manfred. 2020. Layers of the clause: Propositions, judgements, commitments, acts. To appear in Jutta M. Hartmann & Angelika Wöllstein (eds.), Propositional arguments in cross-linguistic research: Theoretical and empirical issues. Tübingen: Narr.
Lee, Robert G., Carol Neidle, Dawn MacLaughlin, Benjamin Bahan, & Judy Kegl. 1997. Role shift in ASL: A syntactic look at direct speech. In Carol Neidle, Dawn MacLaughlin, & Robert G. Lee (eds.), Syntactic structure and discourse function: An examination of two constructions in American Sign Language (American Sign Language Linguistic Research Project Report 4), 24–45. Boston: Boston University Press.


Liddell, Scott & Melanie Metzger. 1998. Gesture in sign language discourse. Journal of Pragmatics 30(6). 657–697.
Lillo-Martin, Diane. 1995. The point of view predicate in American Sign Language. In Karen Emmorey & Judy Reilly (eds.), Language, gesture, and space, 155–170. Hillsdale, NJ: Lawrence Erlbaum.
Lillo-Martin, Diane. 2012. Utterance reports and constructed action. In Roland Pfau, Markus Steinbach, & Bencie Woll (eds.), Sign language. An international handbook, 365–387. Berlin: Mouton de Gruyter.
Lillo-Martin, Diane & Richard P. Meier. 2011. On the linguistic status of ‘agreement’ in sign languages. Theoretical Linguistics 37(3/4). 95–141.
Lillo-Martin, Diane & Ronice Müller de Quadros. 2011. Acquisition of the syntax–discourse interface: The expression of point of view. Lingua 121(4). 623–636.
Maier, Emar. 2014. Mixed quotation: The grammar of apparently transparent opacity. Semantics & Pragmatics 7(7). 1–67.
Maier, Emar. 2015. Quotation and unquotation in free indirect discourse. Mind & Language 30(3). 345–373.
Maier, Emar. 2016. A plea against monsters. Grazer Philosophische Studien 93. 363–395.
Maier, Emar. 2017. The pragmatics of attraction: Explaining unquotation in direct and free indirect discourse. In Paul Saka & Michael Johnson (eds.), The semantics and pragmatics of quotation, 259–280. Berlin: Springer.
Maier, Emar. 2018. Quotation, demonstration, and attraction in sign language role shift. Theoretical Linguistics 44(3/4). 165–176.
Maier, Emar. 2019. Picturing words: The semantics of speech balloons. In Julian J. Schlöder, Dean McHugh, & Floris Roelofsen (eds.), Proceedings of the 22nd Amsterdam Colloquium, 584–592. Amsterdam: ILLC.
Maier, Emar. 2020. Shifting perspectives in pictorial narratives. To appear in Uli Sauerland & Stephanie Solt (eds.), Proceedings of Sinn und Bedeutung 23.
Meir, Irit, Carol A. Padden, Mark Aronoff, & Wendy Sandler. 2007. Body as subject. Journal of Linguistics 43(3). 531–563.
Metzger, Melanie. 1995. Constructed dialogue and constructed action in ASL. In Ceil Lucas (ed.), Sociolinguistics in Deaf communities, 255–271. Washington, DC: Gallaudet University Press.
Padden, Carol A. 1983. Interaction of morphology and syntax in American Sign Language. San Diego, CA: University of California, San Diego dissertation [Published 1988 by Garland Outstanding Dissertations in Linguistics, New York].
Padden, Carol A. 1986. Verbs and role-shifting in American Sign Language. In Carol A. Padden (ed.), Proceedings of the Fourth National Symposium on Sign Language Research and Teaching, 44–57. Silver Spring, MD: National Association of the Deaf.
Pfau, Roland & Josep Quer. 2010. Nonmanuals: Their grammatical and prosodic roles. In Diane Brentari (ed.), Sign languages, 381–402. Cambridge: Cambridge University Press.
Pfau, Roland, Martin Salzmann, & Markus Steinbach. 2018. The syntax of sign language agreement: Common ingredients, but unusual recipe. Glossa 3(1): 107. 1–46.
Pfau, Roland & Markus Steinbach. 2011. Grammaticalization in sign languages. In Bernd Heine & Heiko Narrog (eds.), Handbook of grammaticalization, 681–693. Oxford: Oxford University Press.
Pfau, Roland & Markus Steinbach. 2016. Complex sentences in sign languages: Modality – typology – discourse. In Roland Pfau, Markus Steinbach, & Annika Herrmann (eds.), A matter of complexity. Subordination in sign languages, 1–35. Berlin: de Gruyter Mouton and Ishara Press.
Pfau, Roland, Markus Steinbach, & Bencie Woll. 2012. Tense, aspect, and modality. In Roland Pfau, Markus Steinbach, & Bencie Woll (eds.), Sign language. An international handbook, 186–204. Berlin: Mouton de Gruyter.
Poulin, Christine & Christopher Miller. 1995. On narrative discourse and point of view in Quebec Sign Language. In Karen Emmorey & Judy Reilly (eds.), Language, gesture, and space, 117–131. Hillsdale, NJ: Lawrence Erlbaum.
Pyers, Jennie & Ann Senghas. 2007. Reported action in Nicaraguan and American Sign Languages: Emerging versus established systems. In Pamela Perniss, Roland Pfau, & Markus Steinbach (eds.), Visible variation. Comparative studies on sign language structure, 279–302. Berlin: Mouton de Gruyter.

376

377

Role shift Quer, Josep. 2005. Context shift and indexical variables in sign languages. In Effi Georgala & Jonathan Howell (eds.), Proceedings of Semantics and Linguistics Theory (SALT) 15, 152–​168. Ithaca, NY: CLC Publications. Quer, Josep. 2011. Reporting and quoting in signed discourse. In Elke Brendel, Jörg Meibauer, & Markus Steinbach (eds.), Understanding quotation, 277–​302. Berlin: Mouton de Gruyter. Quer, Josep. 2013. Attitude ascriptions in sign languages and role shift. Proceedings of the Texas Linguistic Society (TLS) 13. 12–​28. Quer, Josep, 2016. Reporting with and without role shift: Sign language strategies of complementation. In Roland Pfau, Markus Steinbach, & Annika Herrmann (eds.), A matter of complexity. Subordination in sign languages, 204–​230. Berlin: de Gruyter Mouton and Ishara Press. Quer, Josep. 2018. On categorizing types of role shift in sign languages. Theoretical Linguistics 44(3/​ 4). 277–​282. Quer, Josep, Carlo Cecchetto, Caterina Donati, Carlo Geraci, Meltem Kelepir, Roland Pfau, & Markus Steinbach. 2017. SignGram Blueprint. A  guide to sign language grammar writing. Berlin: de Gruyter Mouton. Quinto-​Pozos, David. 2007. Can constructed action be considered obligatory? Lingua 117(7). 1285–​1314. Rice, Karen D. 1986. Some remarks on direct and indirect speech in Slave (Northern Athapaskan). In Florian Coulmas (ed.), Direct and indirect Speech, 47–​76. Berlin: Mouton. Schembri, Adam, Caroline Jones, & Denis Burnham. 2005. Comparing action gestures and classifier verbs of motion: Evidence from Australian Sign Language, Taiwan Sign Language, and nonsigners’ gestures without speech. Journal of Deaf Studies and Deaf Education 10(3). 272–​290. Schlenker, Philippe. 2003. A plea for monsters. Linguistics and Philosophy 26(1). 29–​120. Schlenker, Philippe. 2004. Context of thought and context of utterance. A note on free indirect discourse and the historical present. Mind & Language 19(3). 279–​304. Schlenker, Philippe. 2017a. Super monsters I:  Attitude and action role shift in sign language. Semantics and Pragmatics 10(9). 1–​65. Schlenker, Philippe. 2017b. Super monsters II: Role shift, iconicity and quotation in sign language. Semantics and Pragmatics 10(12). 1–​67. Schlenker, Philippe. 2018. Visible meaning. Sign languages and the foundations of semantics. Theoretical Linguistics 44(3/​4). 123–​208. Sharvit, Yael. 2008. The puzzle of free indirect discourse. Linguistics and Philosophy 31(3). 353–​395. Speas, Margaret & Carol Tenny. 2003. Configurational properties of point of view roles. In Anna Maria Di Sciullo (ed.). Asymmetry in grammar, 315–​344. Amsterdam: John Benjamins. Spronck, Stef & Tatiana Nikitina. 2019. Reported speech forms a dedicated syntactic domain. Linguistic Typology 23(1). 119–​159. Stec, Kashmiri, Mike Huiskes, & Gisela Redeker. 2016. Multimodal quotation: Role shift practices in spoken narratives. Journal of Pragmatics 104. 1–​17. Streeck, Jürgen. 2002. Grammars, words, and embodied meanings: On the uses and evolution of so and like. Journal of Communication 52(3). 581–​596. Tang, Gladys & Prudence Lau. 2012. Coordination and subordination. In Roland Pfau, Markus Steinbach, & Bencie Woll (eds.), Sign language. An international handbook, 340–​365. Berlin: Mouton de Gruyter. van Gijn, Ingeborg. 2004. The quest for syntactic dependency. Sentential complementation in Sign Language of the Netherlands. Utrecht: LOT. van Loon, Esther, Roland Pfau, & Markus Steinbach. 2014. The grammaticalization of gestures in sign languages. 
In Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill, & Jana Bressem (eds.), Body –​language –​communication: An international handbook on multimodality in human interaction. Vol. 2, 2133–2149. Berlin: de Gruyter Mouton. Wilbur, Ronnie B. & Cynthia G. Patschke 1999. Syntactic correlates of brow raise in ASL. Sign Language & Linguistics 2(1). 3–​30. Zucchi, Sandro. 2004. Monsters in the visual mode? Unpublished manuscript, Università degli Studi di Milano. Zucchi, Sandro. 2011. Event descriptions and classifier predicates. Paper presented at the conference Formal and Experimental Advances in Sign Language Theory (FEAST), Venice, June 20–​22. Zwitserlood, Inge. 2012. Classifiers. In Roland Pfau, Markus Steinbach, & Bencie Woll (eds.), Sign language. An international handbook, 158–​186. Berlin: Mouton de Gruyter.


17 USE OF SIGN SPACE
Experimental perspectives
Pamela Perniss

17.1 Introduction

The term sign space refers to the space in front of and around the signer's body in which signing happens. As such, all signing makes use of sign space. The phrase use of sign space, however, typically refers to the meaningful use of space. When the use of space is meaningful, the placement and movement of the hands in space carry meaning; that is, location and motion information is itself meaningful, rather than merely contrastive, as it is when functioning as phonological parameters. Many sign types, including pronouns and different kinds of predicates, rely on the meaningful use of space, determined through referent-location associations that range from fulfilling what has been called a purely referential, 'syntactic' function to being topographically motivated (Klima & Bellugi 1979; Poizner et al. 1987). The linguistic analysis of signs that rely on the meaningful use of space has been the focus of a considerable amount of research (for overviews, see, e.g., Cormier (2012) on pronouns; Mathur & Rathmann (2012) on verb agreement; Zwitserlood (2012) on classifiers). In addition, researchers have explored the use of space from an experimental perspective, and have combined quantitative and qualitative approaches to understand how the use of space structures different domains of expression. Questions that have arisen from the special properties of space in sign language include: What is the nature of processing of linguistic structure that relies on the use of space? What is the linguistic status of signs that use space in a meaningful way? What are the neural correlates of the use of space in sign language processing? How do properties of the visual modality structure and influence acquisition of locative expression in sign languages? What is the relationship between spatial language and spatial cognition in signers? This chapter provides an overview of the research that has aimed to answer these questions. Section 17.2 briefly describes the sign types and devices typically associated with the meaningful use of space, and Section 17.3 then outlines the main questions that have given rise to experimental investigation. The linguistic processing of the meaningful use of space is the topic of Section 17.4, reviewing evidence from both behavioral and neuroimaging studies. Section 17.5 deals with locative expression and with the use of viewpoint and different spatial formats or perspectives in signing, and Section 17.6 discusses the acquisition of spatial language by children and adults learning to sign. Finally, Section 17.7 addresses
the relationship between spatial language and spatial cognition, and a short conclusion is provided in Section 17.8.

17.2 Overview of the use of space and associated sign types

Essentially all aspects of linguistic structure in sign languages rely on the use of space in some way. The space in front of the body is the large, visible, three-dimensional articulatory space in which the hands create meaning. Any individual sign is formed with the hands in space, based on the formational parameters of hand configuration, location, and movement (see van der Hulst & van der Kooij, Chapter 1). In discourse, that is, in connected signing, the association of referents with locations in space is an important way of conveying grammatical and semantic information, marking referential distinctions, and conveying locative information. Traditionally, two main functions of space have been assumed: referential and topographic (Klima & Bellugi 1979; Liddell 1990). The referential use of space has also been called syntactic, arbitrary, or abstract; the term abstract will be adopted in this chapter (see Johnston & Schembri 2007). These two functions of space and the sign types most prominently associated with them are at the center of attention in this chapter and are described briefly below.

17.2.1 Abstract use of space

In sentential and discourse contexts, referents get associated with locations in space, often called referential loci or R-loci (Lillo-Martin & Klima 1990). Pointing signs can be directed to these loci to achieve pronominal reference. To use a traditional example, a signer could sign the nominal BOY followed by a point (i.e., IX for 'index') to a location on their right (i.e., BOY IXright), and then sign the nominal GIRL followed by a point to a location on their left (i.e., GIRL IXleft). The association of the nominal signs with these specific locations in signing space (on the right and the left) means that subsequent points to the locations (i.e., pronouns directed toward these locations) establish co-reference with the referents of the nominal signs. Thus, a subsequent utterance of the form IXright LIKE IXleft would mean 'he likes her', whereas IXleft LIKE IXright would mean 'she likes him'. Similarly, certain types of verbs, traditionally called agreement verbs (Padden 1990), and more recently indicating verbs (Liddell 2000; Johnston & Schembri 2007), the term which will be adopted here, can be directed towards or move between referential loci in space in order to indicate (animate) subject and object arguments. In many sign languages, the sign for 'ask' is an indicating verb. The start and end locations of the movement of the verb are associated with the subject and object arguments, respectively. Thus, in continued discourse, using the example above, a sign of the form rightASKleft would mean 'he asked her'. The sign ASK moves from the location associated with the nominal BOY on the right to the location associated with the nominal GIRL on the left (see Quer, Chapter 5, and Hosemann, Chapter 6, for discussion).
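For readers who find it helpful to see the referential bookkeeping spelled out procedurally, the following toy sketch (in Python) is added here as an illustration only; it is not drawn from the chapter or from any formal analysis it reviews. It models R-loci as a simple mapping from locations in sign space to discourse referents, using the hypothetical BOY/GIRL example above.

# Toy sketch of R-locus bookkeeping in the abstract use of space.
# Loci and referent labels are illustrative only.

# Discourse step 1: nominals are associated with loci (BOY IXright, GIRL IXleft).
r_loci = {
    "right": "BOY",
    "left": "GIRL",
}

def resolve_pronoun(locus):
    """A pointing sign (IX) directed at a locus picks out the referent
    previously associated with that locus."""
    return r_loci[locus]

def resolve_indicating_verb(start_locus, end_locus):
    """For an indicating verb like ASK, the start locus is interpreted as
    the subject and the end locus as the object."""
    return r_loci[start_locus], r_loci[end_locus]

# IXright LIKE IXleft: 'he likes her'
print(resolve_pronoun("right"), "LIKE", resolve_pronoun("left"))

# rightASKleft: 'he asked her'
subject, obj = resolve_indicating_verb("right", "left")
print(subject, "asked", obj)

The only point of the sketch is that, on the abstract use of space, a locus behaves like an arbitrary index: swapping the start and end loci of the verb reverses the subject and object interpretation without any change to the verb's lexical content.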

17.2.2 Topographic use of space

The abstract use of space is typically opposed to the topographic use of space. When space is used topographically, spatial information is conveyed. The main sign types associated with the topographic use of space are spatial verbs (Padden 1990) and classifier predicates (e.g., Valli & Lucas 1995), more recently called depicting verbs (Liddell 2003; see Schembri (2003) for an overview of other terminology – this chapter will
use both terms). Classifier predicates are morphologically complex predicates that are the primary means for encoding spatial relationships in sign languages (Supalla 1982; Zwitserlood 2012; also see Tang et al., Chapter 7).1 The handshape is the classifier element in these forms – it encodes referent type by classifying objects in terms of salient features, predominantly size and shape features. For example, a flat hand represents flat objects like a table or a book; an extended index finger represents long, thin objects like a pen or a toothbrush; and curving the fingers and thumb into a rounded handshape represents objects like a cup or a tube. The location or movement of the hand in sign space encodes information about the location or motion of the referent. Thus, the placement of an extended-index hand on a flat hand can encode the spatial relationship 'pen is on table'. Similarly, the meaning 'cup falls off table' can be encoded by moving a rounded hand off and down from a flat hand. Spatial verbs, like MOVE or PUT, mark locative arguments through their movement in space. The end location of the verb PUT, for example, associated with the location of a table, specifies where something gets put. Similarly, the start and end locations of the verb MOVE can encode the movement of an object from one location to another (e.g., from the table to the shelf).

17.2.3 Overlap between abstract and topographic use of space

While spatial verbs are typically associated with a topographic use of space (the term is sometimes used almost synonymously with classifier predicates; see, e.g., Morgan & Woll (2007)), the locative arguments conveyed by spatial verbs need not necessarily be motivated in a topographic, iconic way (i.e., by corresponding to a real-world or imagined spatial scene, as is characteristic of the use of classifier predicates). For example, a signer could move the sign MOVE between two locations in space associated with the cities London and Amsterdam. If London is associated with a location on the right and Amsterdam with a location on the left, signing rightMOVEleft would mean 'move from London to Amsterdam', while signing leftMOVEright would mean 'move from Amsterdam to London'. The locations of London and Amsterdam in the signing space need not correspond to the real-world locations of the two cities (neither in absolute directional terms nor based on a two-dimensional map projection). In this sense, spatial verbs straddle the distinction between the referential and topographic uses of space.2 Some researchers have pointed out that the choice of location in space is rarely arbitrary, even when location expresses non-locative information; see, for instance, van Hoek (1992), who discusses the conceptual motivation of location choice, and Engberg-Pedersen (1993), who proposes various semantic-pragmatic conventions that motivate location choice. This is supported by a recent corpus analysis of indicating verbs in British Sign Language (BSL): these verbs were found to favor a motivated use of space, only rarely occurring with an abstract, or purely referential, use of space (Cormier et al. 2015).3 A different perspective on the idea that the use of space is not fully arbitrary is provided by Geraci (2014), who proposes that a "spatialization" process maps argument structure to default locations in sign space, with the subject argument mapped ipsilaterally and the object argument mapped contralaterally.4 Finally, the two functions – abstract and topographic – are not mutually exclusive (Liddell 1990; Emmorey et al. 1995; see also Perniss 2012). As discussed again below, the same location can function both abstractly and topographically within a stretch of discourse.


17.2.4 Analysis of signs that use space meaningfully

A main debate in the literature has concerned the linguistic analysis of sign types that rely on spatial modification. The nature of this debate will be outlined here, as it is relevant to a substantial portion of the research that will be discussed in this chapter. Central to the debate are classifier predicates or depicting verbs (see Tang et al., Chapter 7). Early analyses considered these signs to be motivated by visual imagery, with analogue mapping rules allowing a mimetic depiction of a spatial scene in the sign space (DeMatteo 1977; Mandel 1977). Under this analysis, these predicates do not conform to the requirements of a linguistic system as characterized by discrete, combinatorial elements. In keeping with the desideratum of the time, that is, to prove the fully linguistic status of sign languages, an analysis was proposed which held classifier predicates to be composed entirely of discrete morphemes that combine in language-specific ways, with individual predicates reaching a potential complexity of tens of morphemes (Supalla 1978, 1986; Newport & Supalla 1980). While these authors recognized that the visual modality "offers the potential for an analogue, rather than a discrete, representational system" (Newport & Supalla 1980: 191), they did not believe that American Sign Language (ASL) – nor presumably any other sign language – took advantage of this potential. The middle ground subsequently proposed by Liddell (1996, 2003) took the influence of the visual modality on the nature of linguistic structure in sign languages into serious consideration. Liddell questioned whether aspects of a predicate's form that related to the use of space, specifically representing location and motion information, could be entirely linguistically specified. Liddell highlighted the 'listability problem' faced by a strictly morphological analysis, suggesting that it would be impossible to list all morphemic components of a fully categorical, linguistically specified predicate. Liddell's solution to the listability problem came in the form of a hybrid analysis: signs that use space to encode location and motion are hybrid structures composed of both morphemic and non-morphemic (or 'gestural') elements.5 Specifically, Liddell (2003) suggested that handshape (i.e., the classifier of depicting verbs) and some movement patterns (e.g., a bouncing movement to encode the meaning 'casually') are linguistically specified, while the choice of location and motion trajectories is gestural in nature. The choice of location depends fundamentally on conceptual representation and the way that a spatial scene actually looks, and is not morphemically specified.6 While Liddell focused primarily on depicting verbs, his analysis extends to all signs that use space in a meaningful way, also including indicating verbs and (pronominal) pointing signs (for similarly motivated analyses, see Johnston & Schembri (2007) and Schembri et al. (2018) for indicating verbs, and Cormier et al. (2013) and Johnston (2013) for pointing; see, however, Pfau et al. (2018) for a purely syntactic/linguistic analysis of agreeing verbs, while taking the gestural origins of sign language into account). As indicating verbs and pointing signs may be directed at locations motivated in a topographic way or be 'purely referential', it is clear that Liddell's analysis is consistent with abstract and topographic uses of space not being mutually exclusive, or there being overlap between them.
For all these signs, it can also be seen that location choice is based on real or imagined locations of referents or on semantic-pragmatic conventions (Engberg-Pedersen 1993), and in a way that is similarly available to and used by signers and gesturers alike. Similarly to the "modality-specific recipe" for the syntactic analysis of sign language agreement proposed by Pfau et al. (2018), the analyses of classifier predicates offered
by Geraci (2009) and Davidson (2015) are formal accounts, that is, fully linguistic analyses, while accommodating analogue components. Davidson (2015) argues that classifier predicates are semantically light verbs that require a demonstrational argument (similar to the analysis of English be like; see Tang et al., Chapter 7, for details). In the account by Geraci (2009), classifier predicates get their demonstrational, real-​world features through movement epenthesis. Classifier predicates are assumed to be unspecified for movement, but because signs need movement to be phonologically well-​formed, and the semantic component of classifier predicates requires a demonstration, the demonstration is inserted as movement by copying movement features from the real world. The notion of copying features from the real world is reminiscent of the Semiological Model in the French tradition of sign language research (see Cuxac 1999; Cuxac & Sallandre 2007), whereby the structure of classifier constructions (depicting verbs) is accounted for in terms of transfer units. These units –​consisting in transfer of size and shape, situational transfer, and personal transfer –​are transferred from experience and mental imagery and form highly iconic, complex compositional structures.

17.3 Research questions and debates arising from the use of space

The use of space for the different mapping functions described above has given rise to a wide range of research questions that have been experimentally investigated, covering linguistic and non-linguistic processing as well as the neural correlates of signing that relies on spatial modification. Questions regarding linguistic processing have addressed, for example, whether pronominal pointing signs activate both nominal and locative referents (Emmorey & Falgier 2004). Another area of investigation has arisen from the debate around the composition and analysis of sign types that can be spatially modified. In particular, the linguistic status of location has been experimentally investigated, with studies looking at whether or not location is processed categorically (Emmorey & Herzig 2003) and whether there is evidence for differential neural correlates of language processing involving the topographic use of space (Emmorey et al. 1995). The idea that sign languages may use forms that combine linguistic and non-linguistic (or gestural) elements, and that signers and non-signers may use space for communicative expression in similar ways (Liddell 1996), was instrumental in establishing a new field of research, namely comparative investigation between sign and gesture (both co-speech gesture and silent gesture). Historically, comparisons between sign and gesture were perceived as risky, because they carried with them the burden of the suggestion that sign and gesture were more similar than sign and speech, and thus that sign languages were not full-fledged languages. Current understanding allows a more critical approach that investigates the similarities and differences between sign, gesture, and speech. Research directly comparing sign and gesture has looked at location and motion encoding (e.g., Schembri et al. 2005), and related domains relevant to the use of space, including co-reference (e.g., So et al. 2005; Perniss & Özyürek 2015) and the use of the body and space in event descriptions (e.g., Quinto-Pozos & Parrill 2015) (for processing of sign vs. gesture, see also Woll & Vinson, Chapter 25). Finally, the nature of the visual modality affords highly iconic and spatially motivated representation. These affordances promote shared systems and structures across sign languages, including the system of classifier predicates (or depicting verbs), the use of different spatial perspectives (observer and character) to project an event space onto
signing space, and the viewpoint (signer or addressee) from which spatial information is represented. The use of space to talk about space affords these systems and structures, and sign languages look overall very similar to each other in these domains (though see research on emerging sign languages (e.g., Meir et al. 2010), on rural sign languages (see de Vos & Pfau (2015) for a review), and cross-linguistic comparative research (e.g., Perniss et al. 2015) for differences). The extent of similarity and difference between sign languages, and the reasons for modality-driven similarities where they do exist, are important and interesting areas of research. These questions extend to the acquisition of spatial language. How do the iconic affordances of the visual modality impact acquisition of spatial language, and how can adult learners draw on their gestural repertoire in learning to express locative information in a sign language?

17.4 Linguistic processing of referent-location associations

17.4.1 Co-reference processing

A series of studies by Emmorey and colleagues has investigated the processing of referent-location associations in sign space using probe recognition tasks. In the probe recognition paradigm, participants are asked to decide whether a probe word appeared in a previously presented sentence or segment of discourse, or whether a probe word is a real word (lexical decision). Probe recognition is faster for probes that are antecedent nouns of pronouns compared to other nouns, providing evidence for antecedent noun (re-)activation in co-reference processing of an anaphoric element. For ASL, this was found to be the case for both overt (Emmorey, Norman, & O'Grady 1991) and null pronouns (i.e., in agreement/directional verbs; Emmorey & Lillo-Martin 1995). The probe recognition paradigm has also been used to investigate more semantically rich use of space. Emmorey & Falgier (2004) were interested in the effects of 'locus doubling' on the processing of pronominal reference. Participants watched short narrative sequences in ASL, as illustrated in example (1), in which a nominal referent (the mother) was associated with two different locations (left and right) in sign space, corresponding to two distinct spatial settings (store and kitchen) for the referent within the discourse.

(1) MY WONDERFUL MOTHER B-U-S-Y, WENT-TOleft STOREleft BUY[iterative] FOOD, FINISH.
    leftBRINGright KITCHENright PREPARE.
    'My wonderful mother is very busy. She went to the store and shopped for food. Then she brought it to her kitchen where she prepared it.'
    (ASL, Emmorey & Falgier 2004: 324)

Continuation sentences included a pronoun directed to the location of the first spatial setting (i.e., to the left, the location of the store in this example). Participants were required to make lexical decisions to probe signs that appeared toward the end of the continuation sentence (the '^' indicates the appearance of the probe sign).

(2) HAPPEN PRONOUNleft FORGOT BU^Y ONION.
    'As it happens, she forgot to buy onion (while she was at the store).'
    (ASL, Emmorey & Falgier 2004: 324)

Of critical interest was whether the pronoun would activate both the nominal referent (i.e., the mother in the example) and the associated spatial location (i.e., the store at
which the mother forgot to buy the onion). To test this, the lexical probe signs were the signs for the nominal referent (e.g., MOTHER) and the two locations (e.g., STORE, KITCHEN). Compared to a no-​pronoun control condition, in which the continuation sentence did not contain a pronoun, participants were faster to make decisions for referent probes (e.g., MOTHER) in the pronoun condition. In contrast, response times did not differ across conditions for location probes. Emmorey & Falgier (2004) concluded that the increased semantic richness of sign space, as evidenced by the possibility of locus doubling, does not lead to a modality effect in antecedent activation. That is, the results suggest that antecedent activation is similar in spoken and signed language. While the form of these pronouns conveys spatial information and the pronouns are certainly understood with respect to this spatial information (i.e., the conceptual location of the referent; here: the mother, when she was at the store), ASL pronouns in this study activated only their antecedent referent nouns, and not also the nouns associated with the spatial location. However, Emmorey & Falgier (2004) note that evidence of activation of both nouns might have been found if the probe had been presented immediately following the pronoun, rather than after a verb (i.e., FORGET), which favored activation of the person noun over the location noun.
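To make the logic of the probe recognition comparison explicit, the following sketch is added here for illustration; the trial records in it are invented placeholders, not data from Emmorey & Falgier (2004). It computes mean lexical decision times by condition and probe type and derives the facilitation effect that is taken as evidence of antecedent activation.

# Minimal sketch of the probe recognition logic: faster responses to a probe in
# the pronoun condition than in the no-pronoun control condition are taken as
# evidence that the pronoun (re-)activated that probe's referent.
# All numbers below are invented placeholders.

from statistics import mean

# Each trial: (condition, probe_type, reaction_time_ms)
trials = [
    ("pronoun", "referent", 710), ("pronoun", "referent", 695),
    ("control", "referent", 760), ("control", "referent", 745),
    ("pronoun", "location", 805), ("pronoun", "location", 815),
    ("control", "location", 800), ("control", "location", 812),
]

def mean_rt(condition, probe_type):
    return mean(rt for cond, probe, rt in trials
                if cond == condition and probe == probe_type)

for probe_type in ("referent", "location"):
    facilitation = mean_rt("control", probe_type) - mean_rt("pronoun", probe_type)
    print(probe_type, "probes: facilitation =", round(facilitation), "ms")

# A reliable positive facilitation effect for referent probes, but not for
# location probes, is the pattern reported in the study.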

17.4.2 Processing of topographic vs. abstract use of space

17.4.2.1 Behavioral evidence

A central question with respect to the topographic vs. syntactic/abstract functions of space is whether signers process these types of referent-location associations differentially. Emmorey et al. (1995) have provided evidence that this is the case, also using a probe recognition task. Sentences were used in which referents were associated with locations on the right, left, and center of sign space, respectively. In topographic sentences, the locations corresponded to an actual physical layout, for instance, the locations of items on a vanity table (perfume, nail polish, blusher). In the non-topographic sentences, locations were associated with referents arbitrarily, fulfilling a purely referential function, for instance, utility bills that needed to be paid (gas, water, electricity). The probe, which appeared directly after each sentence, consisted of a nominal sign (i.e., for one of the referents in experimental trials) executed simultaneously with an index sign (on the non-dominant hand) directed at a location that was either congruent or incongruent with the referent-location association used in the sentence. Participants' task was to decide whether the probe sign had appeared in the sentence, regardless of the form (location) of the index sign. In the topographic, but not in the abstract (referential) condition, participants were significantly slower to make a decision about the probe sign when the location pointed to was incongruent with that of the sentence. That is, signers exhibited interference effects for incongruent points only when spatial location was highly relevant semantically, suggesting different processing – or what the authors call different mental representations – for topographic and abstract spatial functions (see also Emmorey et al. 1991). Additional support for the differential processing of the two functions of space comes from a memory task (Emmorey et al. 1995). Memory for spatial locations that encoded spatial, topographical information was better than memory for locations that functioned for grammatical purposes only (see also review in Perniss (2012)). Emmorey et al. (1995) suggested that topographically associated spatial locations are encoded in memory as part of a semantic representation, while locations that function only to
convey grammatical information need not be retained in memory. It may be, however, that memory would improve for spatial loci if their referential function were relevant for a longer stretch of discourse. In the memory study by Emmorey et al. (1995), the referential loci in the memory task did not perform heavy duty in the sense of maintaining a referent-​location association over time. Instead, a single sentence was used, which was then repeated either with changed spatial loci or changed lexical items.

17.4.2.2 Neuroimaging evidence

In addition to behavioral evidence, a difference in processing of topographic and referential space has been found in neuroimaging studies, including studies with brain-lesioned signers (e.g., Poizner et al. 1987; Hickok et al. 1996; Emmorey 1997; Neville et al. 1998; Emmorey et al. 2002; MacSweeney et al. 2002; Atkinson et al. 2003; Atkinson et al. 2005; Capek et al. 2009; Courtin et al. 2010). One question that arises from the analogue spatial nature of topographic space is whether this use of space recruits the right hemisphere, associated with processing of visual-spatial information. Right-hemisphere involvement in processing of topographic space as contrasted with left-hemisphere processing of non-topographic use of space would point to differential neural organization for the two functions of space in sign language and to distinct types of representation (i.e., linguistic vs. spatial). Hickok et al. (1996) found evidence for this in a study of two deaf signers, one of whom had suffered right-hemisphere damage (RHD), and the other of whom had suffered left-hemisphere damage (LHD). On tasks that tested abilities related to abstract vs. topographic spatial mapping in both comprehension and production, the two signers exhibited complementary processing abilities. The RHD patient showed impaired performance on tasks that required processing of topographic spatial relations, but no impairment for processing spatial information that encoded only grammatical relations. The opposite was the case for the LHD patient: impairment was observed in the processing of referential use of space, but topographic spatial processing abilities remained. Subsequent studies have further illuminated the nature of the dissociation of function, and have pointed to a more nuanced picture. There are different ways in which spatial information can be expressed topographically. In the behavioral studies by Emmorey and colleagues cited above, pointing signs were used to associate referents with topographically motivated locations in sign space. The tasks used in Hickok et al. (1996) involved the use of classifier predicates, for example, to provide an analogue spatial representation of a room layout. Another window into hemispheric involvement in the processing of spatial language has been to compare the use of classifier predicates (depicting verbs) to represent spatial relationships with the use of relational lexemes like 'in', 'on', and 'under'. Relational lexemes are categorical, lexical signs that are iconic of the spatial relationship they express, but do not make use of analogue coding possibilities in the same way as classifier predicates allow. For example, in contrast to classifier predicates, relational lexemes do not encode semantically specific information about the objects involved in a spatial relationship (e.g., rounded object on flat object for 'cup on table') or about the specific nature of the spatial relationship (e.g., representing the cup as being in the middle, left, or right side of the table). Greater impairment was found for processing spatial information expressed by relational lexemes than by classifier predicates in signers with LHD in comprehension (Emmorey 1997; Atkinson et al. 2005; see also Emmorey 2002) and production (Emmorey et al.
2002; Emmorey et  al. 2005). For example, signers exhibited more difficulty matching relational lexemes (e.g., ‘in’) to pictures (e.g., toothbrush in cup) than matching classifier constructions depicting the locative relationship to pictures. This points to clear engagement of the right hemisphere in processing topographic spatial relations, but not to the lack of engagement of the left hemisphere for these structures. Processing of classifiers was still impaired when the left hemisphere was damaged (Atkinson et al. 2005), consistent with findings for left parietal activation during comprehension of BSL sentences using topographic space by MacSweeney et al. (2002). As Atkinson et al. (2005: 247) state “[spatial language] may call upon both the linguistic skills of the left hemisphere and specialized spatial processes of the right”. However, Atkinson et  al. (2005) found that the “only detectable” language-​related problem for signers with RHD related to comprehending topographic use of space (echoing findings by Hickok et al. (1996), but using more comparable tasks to assess topographic and non-​ topographic functions); other problems exhibited by RHD signers all related to non-​ linguistic visuo-​spatial skills (also see Woll & Vinson, Chapter  25). This suggests that the use of topographic space recruits the non-​linguistic visuo-​spatial skills of the right hemisphere –​i.e., skills that allow motivated, non-​arbitrary locations in sign space to be mapped onto real (or imagined) locations in the world (see also Atkinson et al. 2003). This is further supported by findings by Courtin et al. (2010) for French Sign Language (LSF), who found right-​hemisphere involvement in LSF narratives that relied on a high degree of visuo-​spatial encoding and memory for topographic locations. Neuroimaging studies have also looked at the processing of spatial locations used in indicating verbs (also see Hosemann, Chapter 6). Capek et al. (2009) used ERP to investigate processing of syntactic violations in the use of subject and object locations with spatially modifiable verbs like ASK. Participants viewed sentences that they were asked to rate as ‘good’ or ‘bad’. The ‘bad’ sentences contained syntactic anomalies based on movement of the verb being reversed (between subject and object) or using an unspecified location in space that had not been previously associated with a referent. An example of a syntactically correct sentence, a reversed sentence, and an unspecified sentence used in the Capek et al. (2009) study is given below: (3) 

BOY IXright ENAMORED GIRL IXleft, BOY IXright ______ GIRL DATEleft.
    Correct: rightASKleft
    Reversed: *leftASKright
    Unspecified: *leftASKsigner
    'The boy was enamored with the girl, the boy asked the girl for a date.'
    (ASL, Capek et al. 2009: Supporting Appendix, p. 7)

The ERP results showed an early left anterior negativity (LAN) and a P600 for the reversed agreement violation, similar to what has been previously found for syntactic violations in a spoken language like English (and similar to findings for German Sign Language (DGS) by Hänel-Faulhaber et al. (2014)). However, for the syntactic violations that used an unspecified, previously unassigned, location in space, results also revealed a right-lateralized negativity. Dealing with the unspecified location – as the authors note, "the viewer is forced to either posit a new referent whose identity is unknown or infer that the intended referent is one that was previously placed at a different spatial location" (Capek et al. 2009: 8787) – seems to place different demands on the processing system, such that the spatial syntax exhibited in the visual modality recruits regions of the
brain not typically involved in syntactic processing in spoken language. The authors provide a word of caution with respect to assuming that the results necessarily provide evidence of sign language-specific grammatical processing based on the nature of spatial syntax, depending on whether it might be possible to construct comparable syntactic violations for a spoken language. Nevertheless, Capek et al. (2009) interpret the greater right-hemisphere activation in the unspecified (unassigned) location violation as related to spatial mapping demands (or the signer's attempts at meaningful spatial mapping). This suggests an interesting relationship with right-hemisphere involvement in the processing of topographic uses of space that has yet to be further explored. Recent ERP studies by Hosemann et al. (2018) with DGS signers provide a more differentiated view of these types of agreement violations, and help shed light on the nature of signers' attempts at interpreting referent-location associations. As shown in example (4), Hosemann et al. (2018) used preceding sentence contexts in which the signer (locus 1) and a third person referent (here, the signer's father, locus 3a) are introduced. In the continuation sentence, the agreement verb moves from the signer's to the father's location in the match condition (4a) and from the signer's to a previously unspecified location (locus 3b) in the mismatch condition (4b). Thus, the mismatch condition in Hosemann et al. (2018) is akin to the unspecified location condition in Capek et al. (2009).
(4) a. Match condition:
    MY FATHER IX3a SOCCER FAN. NEXT MATCH DATE 1INFORM3a
    'My father is a soccer fan. I will inform him about the date of the next match.'
    b. Mismatch condition:
    MY FATHER IX3a SOCCER FAN. NEXT MATCH DATE 1INFORM3b
    'My father is a soccer fan. I will inform xxx about the date of the next match.'
    (DGS, Hosemann et al. 2018: 4)

In contrast to previous studies (Capek et al. 2009; Hänel-Faulhaber et al. 2014), Hosemann et al. (2018) found a right posterior positivity (followed by a left anterior effect). In discussing their results, the authors point out that signers may not necessarily interpret mismatching sentences as grammatically incorrect. For example, the use of a previously unspecified location could be cataphorically introducing a referent upcoming in subsequent discourse. There may also be pragmatically motivated or discourse structural reasons for associating a single referent with more than one location in space (see also Perniss 2007b; Quer 2011). Hosemann et al. (2018) thus interpret the posterior positivity as reflecting enhanced processing costs for updating the situational context based on the possibility of alternative interpretations and of different types of infelicities that are not necessarily syntactic violations.

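Purely as an illustration of how the match and mismatch conditions in (4) differ in terms of referent-location bookkeeping, the sketch below is an added toy example with hypothetical locus labels, not code from either study. It simply checks whether the end locus of the agreement verb was assigned to a referent in the preceding context.

# Toy check of the manipulation in (4): the continuation is 'matching' if the
# verb's end locus was assigned to a referent in the preceding context, and
# 'mismatching' if it targets a previously unassigned locus.
# Locus labels are illustrative only.

def classify_continuation(verb_end_locus, assigned_loci):
    if verb_end_locus in assigned_loci:
        return "match", assigned_loci[verb_end_locus]
    # Referent unknown: the addressee must accommodate, e.g., by positing a
    # new (cataphoric) referent, which is why a mismatch need not be read as
    # a syntactic violation.
    return "mismatch", None

# Context of (4): MY FATHER IX3a ... assigns locus 3a to 'FATHER'.
assigned = {"3a": "FATHER"}

print(classify_continuation("3a", assigned))   # ('match', 'FATHER'), as in (4a)
print(classify_continuation("3b", assigned))   # ('mismatch', None), as in (4b)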
17.4.3 Morphemic vs. analogue processing of location

As discussed in Section 17.2, the status of location in sign space as representing a morphemic, categorical element vs. representing a non-linguistic, analogue element has been the center of much debate in sign language linguistics. Experimental investigation has brought evidence to bear on this debate. In a series of experiments, Emmorey & Herzig (2003) provided evidence that signers treat location in a gradient fashion in both comprehension and production. Signers who were asked (i) to describe pictures of spatial relationships using classifier constructions, (ii) to rate the acceptability of a classifier
construction for describing a particular spatial scene, and (iii) to interpret location information provided in a classifier construction exhibited analogue performance across the board. This study provided important support for Liddell’s (2003) claims about the use of space in classifier constructions (or depicting verbs), suggesting that when space is used topographically, location is not interpreted in a categorical, morphemic way. Evidence that location is not linguistically specified for topographic spatial information corroborates the findings for right-​hemispheric processing of topographic use of space reviewed above. Additional evidence for neural substrates involved in spatial language comes from a production study of ASL classifier constructions by Emmorey et al. (2013). Previous studies had provided evidence for greater engagement of the right-​hemisphere for the production of classifier constructions compared to the production of lexical signs or sentences without classifiers (e.g., Emmorey et al. 2002; Atkinson et al. 2005; Emmorey et al. 2005; Hickok et al. 2009). Emmorey et al. (2013) went a step further to investigate regions in the brain activated by different components of classifier constructions. Stimuli that differed systematically with respect to object type, object location, and object motion were used in order to tease apart the neural activation involved in the production of the different components comprising classifier constructions. Consistent with predictions from linguistic analysis and behavioral experimental investigation, the authors predicted that increased right-​hemisphere involvement in classifier constructions might specifically reflect the analogue nature of topographic spatial encoding. Their findings supported this prediction, revealing bilateral activation for the location and motion components of classifier constructions –​i.e., for the components involved in creating topographic representation of spatial information. Specifically, results suggest that the production of spatial locations within classifier constructions engages the right parietal cortex, consistent with the recruitment of gradient, non-​linguistic visuo-​spatial mental representation. In contrast, the production of classifier handshapes (representing whole entities) and of lexical signs revealed activation in the left inferior frontal and left inferior temporal cortex, suggesting retrieval of these forms via traditional left-​hemisphere language regions.7

17.5 Use of space

17.5.1 Locative expression

As the discussion of the topographic use of space above has made clear, spatial language in the visual modality is unique in being able to provide a highly iconic, isomorphic representation of a spatial scene. The affordance for iconic representation pertains not only to the locations of referents, and the relative distances between them, but also to featural representation of the referents themselves and to direct representation of the spatial relationship (through simultaneous referent representation). For example, the spatial relationship 'cup on table' could be expressed in a highly iconic manner by means of a simultaneous classifier construction, placing one hand (depicting the shape of a cup by curving the fingers and thumb into a rounded shape) onto the other hand (depicting the flat surface of the table by extending all the fingers and holding the flat hand horizontally), as illustrated in Figure 17.1 by an example from DGS. Classifier predicates (or depicting verbs) exist in essentially all sign languages studied to date, and the exploitation of iconic, topographic affordances of the visual modality has been considered to be a hallmark of spatial language across sign languages. Detailed cross-linguistic investigation of the different forms available to signers for spatial expression, including investigation of the preference for use of different forms across signers, is lacking, however.


Figure 17.1  DGS example of a simultaneous classifier construction used to encode the spatial relationship ‘cup on table’

Recently, Perniss et al. (2015) provided evidence for language-specific differences in spatial encoding through a detailed qualitative and quantitative comparison of DGS and Turkish Sign Language (TİD). The authors focused on the aspects of spatial scenes (already outlined above) that are particularly influenced by the affordances of the visual modality and thus assumed to be similar in spatial encoding across sign languages: entity representation, location representation, and spatial relationship representation. While both DGS and TİD looked quite similar overall (e.g., in a predominant preference for classifier predicates), there were notable differences that point to typological variation in locative expression across sign languages. Crucially, these differences also revealed that sign languages use devices that abstract away from semantically specific entity and location information. For example, the authors found that TİD signers used a form that was generic (non-iconic) with respect to entity and location information (e.g., a handshape with index, middle, and ring finger extended for three entities), but that highlighted the nature of the spatial relationship, in particular of entities in a next-to relationship (see Figure 17.2; see also Özyürek et al. 2010).

Figure 17.2 TİD example of form encoding next-to relationship in the pictured spatial scene showing three plates next to each other. The form (a handshape with three fingers extended, for three entities) highlights the spatial relationship, while abstracting away from entity and location information in terms of iconic, semantically specific representation


TİD signers used this form for entities of various shapes and sizes, and irrespective of the exact locations of entities (e.g., relative distance between objects). Thus, though location can be gradiently encoded, this potential for topographic representation is not always exploited. In general, locative devices may be more varied and be characterized by different degrees of semantic specificity. The similarity of encoding in the spatial domain across sign languages has traditionally stood in stark contrast to the diversity of spatial language in spoken languages. These findings suggest that more detailed cross-​linguistic comparison may reveal more differences between sign languages, and that the diversity of spatial language in the visual modality may be greater than previously thought.

17.5.2 Signing perspective and viewpoint

Another important aspect of the use of space in the visual modality of sign language relates to the use of perspective and viewpoint. When space is used topographically, it is used to create a mapping of a real-world (or imagined) spatial scene. There are two main ways that space can be structured in order to create this mapping. In one, the signer's vantage point is external to the event space. This corresponds to observer perspective, where the signer is in the role of an observer looking onto the entire event space. In the other, the signer's vantage point is within the event. This corresponds to character perspective, as the signer is in the role of a character or person in the event. Different terminology has been used in the literature for these two ways of structuring space. In addition to observer/character perspective (Perniss 2007a; Quinto-Pozos & Parrill 2015), other terms that have been used include depicting/surrogate space (Liddell 2003) and diagrammatic/viewer spatial format (Emmorey & Falgier 1999; Emmorey et al. 2000). Spatial descriptions can, moreover, be given from an egocentric (i.e., the signer/speaker's own) viewpoint or from a non-egocentric (e.g., the addressee's) viewpoint (e.g., Emmorey & Tversky 2002; Pyers et al. 2015). For example, if we are standing on opposite sides of a table on which there is a pen and a piece of paper, I could describe the spatial relationship between the two objects as The pen is to the left of the paper from my own viewpoint, as I see the objects from where I am standing, but I would say The pen is to the right of the paper to describe the spatial relationship from your viewpoint of the objects on the table. Studies have shown that it is not uncommon for speakers to give spatial descriptions from their addressee's viewpoint, based on social, cognitive, and communicative factors (e.g., Mainwaring et al. (2003) for English and Japanese; Schober (1993) for English). This is not the case for signed descriptions, as has been observed across multiple sign languages, where spatial description is primarily from the signer's point of view (Emmorey 1996; Emmorey et al. 1998; Perniss 2007a; Sümer et al. 2016). Pyers et al. (2015) investigated the conventionalization of viewpoint preference for left-right relations in the visual modality.8 Sign-naïve participants were taught signs/gestures for simple objects (e.g., a flat hand for a piece of paper, an extended index finger for a pen). In a first experiment, Pyers et al. (2015) established that an egocentric viewpoint was preferred for both production and interpretation of spatial relationships expressed in the visual modality. In a second experiment, sign-naïve participant pairs were asked to describe and interpret spatial relationships from a particular viewpoint (i.e., from the Producer's or the Perceiver's viewpoint). This meant that one member of the pair was required to adopt the dispreferred non-egocentric viewpoint: when spatial scenes needed to be described and interpreted from the Producer's viewpoint, the burden of
the non-egocentric viewpoint fell to the perceiver/addressee; when scenes were described and interpreted from the Perceiver's viewpoint, the gesture producer had to relinquish their egocentric viewpoint. The authors found that the cost of adopting a non-egocentric viewpoint was greater for producers than for perceivers, suggesting that sign languages have converged on the most cognitively efficient means of expressing left-right spatial relations. Sign perceivers routinely 'transform' the spatial representation presented in sign space in order to interpret viewpoint-dependent left-right relationships from the sign producer's point of view (Emmorey et al. 1998; Emmorey 2002; Perniss 2007a; Pyers et al. 2015; Sümer 2015). The findings by Pyers et al. (2015) suggest that sign perceivers may make use of non-linguistic cognitive tools like motor embodiment to more easily adopt a non-egocentric perspective. The suggestion that motor embodiment – specifically, a motor theory of sign perception – may be an important mechanism involved in addressee interpretation of spatial scenes signed from the signer's viewpoint has also been made by Emmorey et al. (1998). In this study, signers first viewed a room layout, with the doorway marked and pieces of furniture arranged in a left-right or front-back configuration, and then viewed a signed spatial description of the room's layout produced either from the signer's viewpoint or the addressee's viewpoint. In judging whether the room and the description matched, ASL signers showed better – that is, more accurate – comprehension when spatial descriptions were produced from the signer's point of view compared to the addressee's viewpoint. The authors suggest that this may be due to habitual exposure to the signer's viewpoint, essentially constituting evidence of a language-specific effect. The use of an egocentric viewpoint by signers in giving spatial descriptions is discussed in modality-specific terms. As signing space is used to represent a mental image of physical space or imagined space, it is hypothesized that a 'spatial mapping principle' may drive an isomorphic mapping between signing space and a real or imagined space. The study by Pyers et al. (2015) provides support for this hypothesis in terms of cognitive load. As stated in Pyers et al. (2015), for the signer to adopt the addressee's viewpoint would require generation of a mental image and spatial description that is in direct perceptual conflict with the spatial scene to be described, and their results seem to support the heavy cognitive demand of this strategy. In contrast, the addressee's cognitive load in adopting a non-egocentric perspective in interpreting spatial scenes may be reduced by the ability to internally simulate the signer's production. Thus, in addition to reflecting habitual exposure, the results of Emmorey et al. (1998) would also reflect the convergence of viewpoint convention in the visual modality on the cognitively least demanding strategy. The spatial format or perspective from which a signer projects an event space onto the signing space has been investigated with respect to route descriptions (Emmorey & Falgier 1999; Emmorey et al. 2000) and event descriptions (Perniss & Özyürek 2008; Özyürek & Perniss 2011; Quinto-Pozos & Parrill 2015). For example, Emmorey & Falgier (1999) and Emmorey et al.
(2000) found that ASL signers preferred diagrammatic space or observer perspective to describe spatial environments (e.g., the layout of a convention center or town). That is, signers preferred to map the whole environment onto the signing space, placing landmarks in their appropriate relative locations in a topographic representation of the scene itself. (This corresponds to providing a survey perspective of the space, a term used for spoken route descriptions, e.g., Taylor & Tversky 1996). However, signers also adopted a character perspective or viewer space format when giving route descriptions. Here, locations of landmarks are described as
if the person is encountering them while physically walking through the environment (i.e., as if mapping out a route through the space, and thus also called route perspective, Taylor & Tversky (1996)).9 Emmorey et al. (2002) note a preference in signers for the use of diagrammatic space (or observer perspective) for expressing spatial relationships, especially complex spatial information. This highlights the relationship between type of event space projection and type of information encoded, in particular with respect to the use of classifiers. In syntactic terms, entity classifiers represent the subject of an intransitive verb, while handling classifiers represent the object of a transitive verb (with the subject mapped onto the body) (e.g., Benedicto & Brentari 2004; Meir et al. 2012); in semantic terms, entity classifiers represent the location and motion of an entity, while handling classifiers depict the manner of manual activity. This suggests a correspondence between the use of entity classifiers to depict location/​motion information from an observer perspective, on the one hand, and the use of handling classifiers to depict action information from a character perspective, on the other hand. Evidence for this comes from analyses of event narratives in sign languages (e.g., Perniss 2007b, 2007c; Özyürek & Perniss 2011; Janzen 2012) and between sign and gesture (Cormier et al. (2012) for BSL and English; Quinto-​Pozos & Parrill (2015) for ASL and English). For example, Quinto-​Pozos & Parrill (2015) found that signers predominantly used constructed action (character perspective) when gesturers used a CVPT (character viewpoint) strategy, and were more likely to use entity classifiers when gesturers used OVPT (observer viewpoint). Özyürek & Perniss (2011) found some evidence for typological differences in the use of space for expression of different types of information. Observer perspective was used for representation of entity location and motion by means of entity classifiers in both sign languages. However, TİD signers, but not DGS signers, also exhibited the use of handling classifiers for location representation in an observer perspective space (e.g., two referents holding a pan, located on the right and left sides of signing space).

17.6 The acquisition of spatial language in sign languages

The acquisition of spatial language in sign languages is another area that has received considerable attention (see also Zwitserlood, Chapter 8). The use of space to talk about space raises the question of modality effects in the acquisition of spatial language. Does the iconicity of spatial language – mapping a spatial scene in an analogue way to the signing space – facilitate the acquisition of spatial language for children learning to sign, or does the complex simultaneous and polycomponential nature of classifier constructions (the primary means for encoding of spatial information) hinder the acquisition of spatial language compared to spoken language acquisition? Most of the research to date has pointed to children acquiring spatial language in the visual-spatial modality lagging behind their hearing peers (e.g., Newport & Supalla 1980; Newport & Meier 1985; Engberg-Pedersen 2003; Slobin et al. 2003; Martin & Sera 2006; Tang et al. 2007; Morgan et al. 2008). Taken together, this research has looked at different types of spatial language, including location and motion encoding, and viewpoint-dependent (e.g., left, right) and viewpoint-independent (topological) relations (e.g., in, on, under). However, a lack of direct comparison between location and motion events or between viewpoint-dependent and topological relationships has made it difficult to make generalizations about the nature of spatial language acquisition in a sign language. In addition, for the most part, this research has not directly compared children acquiring a sign language
with children acquiring a spoken language (see Martin & Sera (2006) for an exception), and research has not directly compared child language with adult target language (see Engberg-​Pedersen (2003) and Tang et al. (2007) for exceptions). Sümer (2015) provides a comprehensive analysis of the acquisition of TİD by signing children across domains, comparing acquisition of Turkish by hearing peers as well as comparing directly to adult language, using the same tasks (see also Sümer et al. 2013; Sümer et al. 2014). In acquiring adult-​like usage of forms encoding topological relations, language modality was not found to have an effect. Only for viewpoint-​dependent spatial relationships on the sagittal axis (i.e., front-​back relationships) was it found that deaf children learning TİD lagged behind hearing children learning Turkish. For left-​right relationships (lateral axis viewpoint-​dependent relationships), in contrast, the visual modality seemed to have a facilitating effect. TİD-​acquiring children producing locative expressions encoding left-​right information seemed able to rely on the direct mapping of a spatial scene onto sign space (or onto their body by means of body-​anchored relational lexemes for ‘left’ and ‘right’). Previous results by Martin & Sera (2006) have found that deaf children acquiring ASL struggle with the comprehension of left-​right spatial relationships. Together, these results suggest an asymmetry in comprehension and production of left-​right relationships, most likely related to the challenges of learning the conventions of viewpoint. This is consistent with findings by Pyers et al. (2015) that the inherent preference for an egocentric viewpoint (see also Piaget & Inhelder 1956; Clark 1973) must be overcome for comprehenders through learned conventionalization. The acquisition of spatial language is also interesting from the perspective of second-​ language (L2) learning of a sign language by hearing adults. Marshall & Morgan (2015) investigated how hearing learners’ existing experience with co-​ speech gesture might support acquisition of the classifier predicate system, specifically with respect to encoding entity type and entity location and orientation. Participants were asked to perform both a comprehension task and a production task. In the production task, which was completed by L2 learners and native signers of BSL, participants saw pairs of pictures featuring two or more objects in simple or complex spatial configurations and were asked to describe in BSL what change (i.e., of location and/​or orientation) had taken place with respect to the configuration of entities between the first and the second picture. In the comprehension task, which was completed by L2 learners of BSL and sign-​naïve English speakers, participants watched a signed spatial description and matched it to one of four available pictures. The comprehension task showed a high level of accuracy for both learners and sign-​naïve participants. The production task showed that L2 learners struggled primarily with handshape (the more categorical, linguistically conventionalized element), while there was little difficulty with location, which can be mapped in an iconic, topographic way between locations in the real world and locations in sign/​gesture space (see also Schembri et al. 2005). 
This contrasts with findings by Ferrara & Nilsson (2017), who asked second-year university students enrolled in an interpreting program for Norwegian Sign Language to give route and building layout descriptions. Compared to their instructors, who were also asked to complete the task, the students struggled least with the handshape parameter and more with the orientation, location, and movement parameters. In addition, students used fewer depicting signs overall and had difficulty coordinating their use of depicting signs in the signing space (the hands in relation to the body and in relation to each other), struggling in particular with the appropriate use of depicting signs in character perspective route descriptions (see Section 17.5.2). Direct comparison of the two studies is
difficult, however; Marshall & Morgan (2015) focused on single elicited descriptions of location, while Ferrara & Nilsson (2017) investigated different types of depicting signs (expressing location, size and shape, movement) in spontaneous descriptions. In addition, students in the Ferrara & Nilsson (2017) study had more intensive and targeted sign language instruction compared to the learner group in Marshall & Morgan (2015). While these studies focused on the use of depicting signs (classifier predicates), the longitudinal study by Boers-Visker & van den Bogaerde (2019) encompasses all signs that rely on the use of space: pointing signs, spatially modified verbs (including indicating/agreement and depicting/classifier verbs), and other signs modified for location (e.g., nominal and adjectival signs). The study is based on short interviews about everyday topics (family, work, hobbies), conducted every ten weeks over the course of four years with two students enrolled in the Sign Language of the Netherlands (NGT) interpreting program. The L2 data is compared with data from L1 NGT signers who engaged in the same task. Findings by Boers-Visker & van den Bogaerde (2019) corroborate the findings of previous studies in showing that students find the use and correct production (with respect to both handshape and spatial coordination) of classifier/depicting verbs challenging. However, the authors further note the largely appropriate use of pointing signs and agreement/indicating verbs even in the early stages of L2 NGT acquisition, and suggest that learners’ experience with gestural use of space scaffolds learning in these domains. Overall, the findings on L2 acquisition of a sign language are consistent with other research that posits a gradient, “gestural” component of spatial language in sign language, though the research also makes clear the complex nature of the systems in question and the multitude of different elements that need to be coordinated in order to produce native-like signing in the spatial domain.

17.7  Spatial language and spatial cognition
A final domain of investigation has been the relationship between spatial language and spatial cognition, and more generally the effects of visual-spatial language on non-linguistic processing. Sign language uses space to convey spatial and referential information, such that referent-location associations and spatial layout information are part of linguistic processing. The reliance on space to convey topographic and grammatical information may affect non-linguistic spatial skills related to mental imagery generation, maintenance, and transformation. These abilities were investigated by Emmorey et al. (1993), who stated that “visual-spatial perception, memory, and mental transformations are all prerequisites to grammatical processing in ASL, and also are central to visual mental imagery” (Emmorey et al. 1993: 140), and thus hypothesized that these mental imagery abilities are central to sign language production and comprehension. Many aspects of the use of space in sign language make it easy to see how non-linguistic visuo-spatial abilities may be recruited in language processing. Recall that locations in space are assigned to discourse referents, e.g., associating ‘boy’ with a location on the right side of signing space and ‘girl’ with a location on the left (as in example (3) in Section 17.4.2.2). These referent-location associations may remain in place and be used over an extended stretch of discourse, and would require maintenance in memory over a period of time. The use of space to provide a route description or a topographic representation of a spatial scene relies on a conceptual image of a real-world or imagined scene. The use of different perspectives or spatial formats (Perniss 2007b, 2007c; Janzen 2012), the occurrence of ‘locus doubling’ (van Hoek 1992; Emmorey & Falgier 2004) as
well as the interpretation of spatial arrays from the signer’s viewpoint (Pyers et al. 2015) are good candidates for requiring image transformation. Emmorey et al. (1993) tested these abilities in signers and non-signers on non-linguistic tasks and found that ASL signers (deaf and hearing) exhibited better image generation and transformation abilities compared to non-signers, while image maintenance abilities did not differ between the groups. For example, in the image generation task, participants were shown block letters drawn on 4 × 5 grids and were later prompted (by lowercase cursive letters) to generate the block letters in mental imagery. Compared to non-signers, ASL signers were more accurate in judging whether a certain cell in the grid had been part of the previously seen block letter (‘yes’) or not (‘no’) (see Figure 17.3 for an example).
Figure 17.3  Adapted image of example stimuli used by Emmorey et al. (1993) in the image generation task

The image transformation task tested mental rotation abilities by showing participants pairs of images consisting of shapes made from connected cells in a 4 × 5 grid. Signers were faster than non-​signers at deciding whether the two shapes were the same, regardless of their orientations (see Figure 17.4).

Figure 17.4  Adapted image of example stimuli used by Emmorey et al. (1993) in the image transformation, or mental rotation, task: a target shape and the same shape at different degrees of rotation

These findings suggest that visual imagery is involved in the processing of ASL, and that the visual abilities that are particularly recruited in using a sign language are enhanced in non-linguistic domains as a result. Emmorey & Kosslyn (1996) extended these findings by showing a right-hemisphere advantage for image generation in deaf signers. Participants were cued with visual stimuli in either the right visual field (left hemisphere) or the left visual field (right hemisphere). Consistent with the results from neuroimaging studies discussed in Section 17.4.2.2, these findings suggest that the enhanced imagery abilities of signers are linked to right-hemisphere processing. The relationship between spatial language and spatial cognition was also investigated by Pyers et al. (2010). Successive cohorts of Nicaraguan Sign Language signers were tested on their spatial language abilities (providing locations of objects in a room) and spatial cognition abilities (object search under disorientation and rotated container conditions).
Measures that operationalized signers’ spatial language abilities included mention of, consistent placement of, and simultaneous representation of Figure and Ground objects (see Perniss et al. (2015) and Sümer (2015) for similar measures). Pyers et al. (2010) found a correlation between spatial language abilities and spatial reasoning abilities, suggesting that consistent, linguistically conventionalized use of spatial language is necessary for mature spatial cognition, at least in the domains tested. The second (younger) cohort of signers exhibited more consistent spatial language than the first (older) cohort of signers (who developed an early form of the language), and also consistently outperformed the first cohort on the search tasks.

17.8  Conclusion
This chapter has provided an experimental perspective on the use of space in sign language. It has covered the linguistic processing of locations in sign space as they are used in abstract or topographic functions, including both behavioral and neuroimaging results; locative expression and the use of perspective (or spatial formats) and viewpoint; the acquisition of spatial language in both L1 and L2 acquisition; and the relationship between spatial language and non-linguistic spatial abilities. In doing so, the chapter has also addressed the use of space with respect to the relationship between sign and gesture (see also Woll & Vinson, Chapter 25) and the influence of the visual modality on the structure of spatial language. The chapter reviewed older research, which focused primarily on the difference between abstract (referential) and topographic functions of space, as well as more recent research, where the focus is more on understanding the interface between gradient, analogue elements and categorical, linguistic elements in sign language structure and processing. Future experimental work and neuroimaging studies will continue to provide new insight into this question. This chapter has kept a quite narrow focus on the use of space as pertaining to the use and processing of referent-location associations in signing space. As such, it has only touched on related topics like reference tracking (e.g., Perniss & Özyürek 2015; Frederiksen & Mayberry 2016), discourse cohesion (e.g., Winston 1995), constructed action (e.g., Quinto-Pozos & Mehta 2010; Cormier et al. 2016), and handshape encoding (e.g., Brentari et al. 2015; Padden et al. 2015; Sevcikova Sehyr et al. 2015; Ortega et al. 2017). These domains deserve mention, however, as they contribute in important ways to our understanding of the use of space in sign language.

Notes
1 Categorical (yet iconic) relational lexemes also exist, for instance, signs glossed as ON, IN, UNDER, LEFT, but the use of classifier predicates predominates in spatial language (e.g., Perniss et al. 2015).
2 Padden (1990) proposed a tripartite division of verbs that comprises spatial verbs, agreement verbs, and plain verbs. Plain verbs do not move in space to mark arguments and tend to be body-anchored. With plain verbs, word order or the use of a person agreement marker (PAM; Mathur & Rathmann 2012; Sapountzaki 2012) conveys information about argument structure (as in the example with LIKE in Section 17.2.1 above). Agreement (indicating) verbs and spatial verbs move in space to mark arguments, whereby agreement (indicating) verbs mark animate arguments corresponding to agent/patient roles, and spatial verbs mark spatial arguments corresponding to source/goal roles.
3 In this study, the category of indicating verbs subsumes Padden’s (1990) categories of agreement verbs and spatial verbs.
4 See Wienholz et al. (2018) for support from an ERP study for a default association of the first referent with the ipsilateral side and of the second referent with the contralateral side of signing space.
5 Liddell’s use of the word ‘gestural’ reflects his argument that signers and gesturers (i.e., hearing non-signers) use space in a comparable way to depict spatial layouts (Liddell 1996).
6 A recent paper by Steinbach & Onea (2016) proposes a Discourse Representational Theory (DRT) analysis of R-loci that offers a solution for the listability problem. Regions of signing space become available for R-loci by virtue of being defined as standing in opposition to regions with already introduced R-loci. As such the grammar need only delimit an area to be specified, not list the individual locations separately (see also Kuhn, Chapter 21).
7 Emmorey & Herzig (2003) similarly find differential evidence for handshape vs. location encoding, also providing evidence that classifier handshapes are interpreted categorically. The findings on handshape are not detailed here given the chapter’s focus on the use of space (and thus on location/motion encoding); but see Zwitserlood (Chapter 8) for discussion.
8 The use of the signer’s viewpoint for encoding viewpoint-dependent spatial relationships seems more robust for left-right relationships compared to front-back relationships (Emmorey 1996; Sümer et al. 2016, including a discussion for why this may be the case).
9 Emmorey et al. (2002) compare route descriptions in ASL and English, and compare both sign with speech and sign with gesture, noting interesting similarities between the use of perspective in both languages, as evidenced by both the comparison across and within modalities.

References
Atkinson, Jo, Bencie Woll, & Susan Gathercole. 2003. The impact of developmental visuospatial learning difficulties on British Sign Language. Neurocase 8. 100–118.
Atkinson, Jo, Jane Marshall, Bencie Woll, & Alice Thacker. 2005. Testing comprehension abilities in users of British Sign Language after CVA. Brain and Language 94. 233–248.
Benedicto, Elena & Diane Brentari. 2004. Where did all the arguments go? Argument-changing properties of classifiers in ASL. Natural Language and Linguistic Theory 22(4). 743–810.
Boers-Visker, Eveline & Beppie van den Bogaerde. 2019. Learning to use space in the L2 acquisition of a signed language: Two case studies. Sign Language Studies 19(3). 410–452.
Brentari, Diane, Alessio Di Renzo, Jonathan Keane, & Virginia Volterra. 2015. Cognitive, cultural and linguistic sources of a handshape distinction expressing agentivity. Topics in Cognitive Science 7(1). 95–123.
Capek, Cheryl M., Giordana Grossi, Aaron J. Newman, Susan L. McBurney, David Corina, Brigitte Roeder, & Helen J. Neville. 2009. Brain systems mediating semantic and syntactic processing in deaf native signers: Biological invariance and modality specificity. Proceedings of the National Academy of Sciences of the United States of America 106(21). 8784–8789.
Clark, Herbert H. 1973. Space, time, semantics, and the child. In Timothy E. Moore (ed.), Cognitive development and acquisition of language, 27–63. New York: Academic Press.
Cormier, Kearsy. 2012. Pronouns. In Roland Pfau, Markus Steinbach, & Bencie Woll (eds.), Sign language: An international handbook, 227–244. Berlin: De Gruyter Mouton.
Cormier, Kearsy, David Quinto-Pozos, Zed Sevcikova, & Adam Schembri. 2012. Lexicalisation and de-lexicalisation processes in sign languages: Comparing depicting constructions and viewpoint gestures. Language and Communication 32(4). 329–348.
Cormier, Kearsy, Adam Schembri, & Bencie Woll. 2013. Pronouns and pointing in sign languages. Lingua 137. 230–247.
Cormier, Kearsy, Jordan Fenlon, & Adam Schembri. 2015. Indicating verbs in British Sign Language favour motivated use of space. Open Linguistics 1(1). 684–707.
Cormier, Kearsy, Sandra Smith, & Zed Sevcikova Sehyr. 2016. Rethinking constructed action. Sign Language & Linguistics 18(2). 167–204.
Courtin, Cyril, Pierre Y. Hervé, Laurent Petit, Laure Zago, Mathieu Vigneau, Virginie Beaucousin, Gael Jobard, Bernard M. Mazoyer, Emmanuel Mellet, & Nathalie Tzourio-Mazoyer. 2010. The neural correlates of highly iconic structures and topographic discourse in French Sign Language as observed in six hearing native signers. Brain & Language 114. 180–192.
Cuxac, Christian. 1999. The expression of spatial relations and the spatialization of semantic relations in French Sign Language. In Catherine Fuchs & Stéphane Robert (eds.), Language diversity and cognitive representations, 123–142. Amsterdam: John Benjamins.
Cuxac, Christian & Marie-Anne Sallandre. 2007. Iconicity and arbitrariness in French Sign Language: Highly iconic structures, degenerated iconicity and diagrammatic iconicity. In Elena Pizzuto, Paolo Pietrandrea, & Raffaele Simone (eds.), Verbal and signed languages: Comparing structures, constructs and methodologies, 13–33. Berlin: Mouton de Gruyter.
Davidson, Kathryn. 2015. Quotation, demonstration, and iconicity. Linguistics and Philosophy 38(6). 477–520.
DeMatteo, Asa. 1977. Visual imagery and visual analogues in American Sign Language. In Lynn A. Friedman (ed.), On the other hand: New perspectives on American Sign Language, 109–136. London: Academic Press.
de Vos, Connie & Roland Pfau. 2015. Sign language typology: The contribution of rural sign languages. Annual Review of Linguistics 1. 265–288.
Emmorey, Karen. 1996. The confluence of space and language in signed languages. In Paul Bloom, Mary A. Peterson, Lynn Nadel, & Merrill F. Garrett (eds.), Language and space, 171–209. Cambridge, MA: MIT Press.
Emmorey, Karen. 1997. The neural substrates for spatial cognition and language: Insights from sign language. Paper presented at the Cognitive Science Society Meeting (CogSci).
Emmorey, Karen. 2002. Language, cognition and the brain: Insights from sign language research. Mahwah, NJ: Lawrence Erlbaum.
Emmorey, Karen (ed.). 2003. Perspectives on classifier constructions in sign languages. Mahwah, NJ: Lawrence Erlbaum.
Emmorey, Karen, David Corina, & Ursula Bellugi. 1995. Differential processing of topographic and referential functions of space. In Karen Emmorey & Judy S. Reilly (eds.), Language, gesture, and space, 43–62. Hillsdale, NJ: Lawrence Erlbaum.
Emmorey, Karen, Hanna Damasio, Stephen McCullough, Thomas Grabowski, Laura L.B. Ponto, Richard D. Hichwa, & Ursula Bellugi. 2002. Neural systems underlying spatial language in American Sign Language. Neuroimage 17. 812–824.
Emmorey, Karen & Brenda Falgier. 1999. Talking about space with space. In Elisabeth A. Winston (ed.), Storytelling and conversation: Discourse in Deaf communities, 3–26. Washington, DC: Gallaudet University Press.
Emmorey, Karen & Brenda Falgier. 2004. Conceptual locations and pronominal reference in American Sign Language. Journal of Psycholinguistic Research 33(4). 321–331.
Emmorey, Karen, Thomas Grabowski, Stephen McCullough, Laura L.B. Ponto, Richard D. Hichwa, & Hanna Damasio. 2005. The neural correlates of spatial language in English and American Sign Language: A PET study with hearing bilinguals. Neuroimage 24. 832–840.
Emmorey, Karen & Melissa Herzig. 2003. Categorical versus gradient properties of classifier constructions in ASL. In Karen Emmorey (ed.), Perspectives on classifier constructions in sign languages, 221–246. Mahwah, NJ: Lawrence Erlbaum.
Emmorey, Karen, Edward Klima, & Gregory Hickok. 1998. Mental rotation within linguistic and nonlinguistic domains in users of American Sign Language. Cognition 68. 221–246.
Emmorey, Karen & Stephen M. Kosslyn. 1996. Enhanced image generation abilities in deaf signers: A right hemisphere effect. Brain and Cognition 32. 28–44.
Emmorey, Karen, Stephen M. Kosslyn, & Ursula Bellugi. 1993. Visual imagery and visual-spatial language: Enhanced imagery abilities in deaf and hearing ASL signers. Cognition 46. 139–181.
Emmorey, Karen & Diane Lillo-Martin. 1995. Processing spatial anaphora: Referent reactivation with overt and null pronouns in American Sign Language. Language and Cognitive Processes 10. 631–664.
Emmorey, Karen, Stephen McCullough, Sonya Mehta, Laura L.B. Ponto, & Thomas J. Grabowski. 2013. The biology of linguistic expression impacts neural correlates for spatial language. Journal of Cognitive Neuroscience 25(4). 517–533.
Emmorey, Karen, Freda Norman, & Lucinda O’Grady. 1991. The activation of spatial antecedents from overt pronouns in American Sign Language. Language and Cognitive Processes 6(3). 207–228.
Emmorey, Karen & Barbara Tversky. 2002. Spatial perspective choice in ASL. Sign Language & Linguistics 5(1). 3–25.
Emmorey, Karen, Barbara Tversky, & Holly A. Taylor. 2000. Using space to describe space: Perspective in speech, sign, and gesture. Spatial Cognition and Computation 2. 157–180.
Engberg-Pedersen, Elisabeth. 1993. Space in Danish Sign Language: The semantics and morphosyntax of the use of space in a visual language. Hamburg: Signum.
Engberg-Pedersen, Elisabeth. 2003. How composite is a fall? Adults’ and children’s descriptions of different types of falls in Danish Sign Language. In Karen Emmorey (ed.), Perspectives on classifier constructions in sign languages, 311–332. Mahwah, NJ: Lawrence Erlbaum.
Ferrara, Lindsay & Anna-Lena Nilsson. 2017. Describing spatial layouts as an L2M2 signed language learner. Sign Language & Linguistics 17(1). 1–26.
Frederiksen, Anne T. & Rachel I. Mayberry. 2016. Who’s on first? Investigating the referential hierarchy in simple ASL narratives. Lingua 180. 49–68.
Geraci, Carlo. 2009. Real world and copying epenthesis: The case of classifier predicates in Italian Sign Language. In Anisa Schardl, Martin Walkow, & Muhammad Abdurrahman (eds.), Proceedings of the North East Linguistic Society 38, 237–250. Amherst, MA: GSLA.
Geraci, Carlo. 2014. Spatial syntax in your hands. NELS 44: Proceedings of the 44th Annual Meeting of the North East Linguistic Society, vol. 1. 123–134.
Hänel-Faulhaber, Barbara, Nils Skotara, Monique Kügow, Uta Salden, Davide Bottari, & Brigitte Roeder. 2014. ERP correlates of German Sign Language processing in deaf native signers. BMC Neuroscience 15: 62.
Hickok, Gregory, Ursula Bellugi, & Edward Klima. 1996. The neurobiology of sign language and its implications for the neural basis of language. Nature 381. 699–702.
Hickok, Gregory, Herbert Pickell, Edward Klima, & Ursula Bellugi. 2009. Neural dissociation in the production of lexical versus classifier signs in ASL: Distinct patterns of hemispheric asymmetry. Neuropsychologia 47. 382–387.
Hosemann, Jana, Annika Herrmann, Holger Sennhenn-Reulen, Matthias Schlesewsky, & Markus Steinbach. 2018. Agreement or no agreement. ERP correlates of verb agreement violation in German Sign Language. Language, Cognition and Neuroscience 33(9). 1107–1127.
Janzen, Terry. 2012. Two ways of conceptualizing space: Motivating the use of static and rotated vantage point space in ASL discourse. In Barbara Dancygier & Eve Sweetser (eds.), Viewpoint in language: A multimodal perspective, 156–174. Cambridge: Cambridge University Press.
Johnston, Trevor. 2013. Towards a comparative semiotics of pointing actions in signed and spoken languages. Gesture 3(2). 109–142.
Johnston, Trevor & Adam Schembri. 2007. Australian Sign Language: An introduction to sign language linguistics. Cambridge: Cambridge University Press.
Klima, Edward & Ursula Bellugi. 1979. The signs of language. Cambridge, MA: Harvard University Press.
Liddell, Scott K. 1990. Four functions of a locus: Reexamining the structure of space in ASL. In Ceil Lucas (ed.), Sign language research: Theoretical issues, 176–198. Washington, DC: Gallaudet University Press.
Liddell, Scott K. 1996. Spatial representations in discourse: Comparing spoken and signed language. Lingua 98. 145–167.
Liddell, Scott K. 2000. Indicating verbs and pronouns: Pointing away from agreement. In Karen Emmorey & Harlan L. Lane (eds.), The signs of language revisited: An anthology to honor Ursula Bellugi and Edward Klima, 303–320. Mahwah, NJ: Lawrence Erlbaum.
Liddell, Scott K. 2003. Grammar, gesture, and meaning in American Sign Language. Cambridge: Cambridge University Press.
Lillo-Martin, Diane & Edward Klima. 1990. Pointing out differences: ASL pronouns in syntactic theory. In Susan D. Fischer & Patricia Siple (eds.), Theoretical issues in sign language research: Vol. 1: Linguistics, 191–210. Chicago: University of Chicago Press.
MacSweeney, Mairead, Bencie Woll, Ruth Campbell, Gemma A. Calvert, Philip K. McGuire, Anthony S. David, Andrew Simmons, & Michael J. Brammer. 2002. Neural correlates of British Sign Language comprehension: Spatial processing demands of topographic language. Journal of Cognitive Neuroscience 14. 1064–1075.
Mainwaring, Scott D., Barbara Tversky, Motoko Ohgishi, & Diane J. Schiano. 2003. Descriptions of simple spatial scenes in English and Japanese. Spatial Cognition and Computation 3. 3–42.
Mandel, Mark. 1977. Iconic devices in American Sign Language. In Lynn A. Friedman (ed.), On the other hand: New perspectives on American Sign Language, 57–107. London: Academic Press.
Marshall, Chloë R. & Gary Morgan. 2015. From gesture to sign language: Conventionalization of classifier constructions by adult hearing learners of BSL. Topics in Cognitive Science 7. 61–80.
Martin, Amber Joy & Maria D. Sera. 2006. The acquisition of spatial constructions in American Sign Language and English. Journal of Deaf Studies and Deaf Education 11(4). 391–402.
Mathur, Gaurav & Christian Rathmann. 2012. Verb agreement. In Roland Pfau, Markus Steinbach, & Bencie Woll (eds.), Sign language: An international handbook, 136–157. Berlin: De Gruyter Mouton.
Meir, Irit, Wendy Sandler, Carol Padden, & Mark Aronoff. 2010. Emerging sign languages. In Marc Marschark & Patricia Elizabeth Spencer (eds.), Oxford handbook of Deaf studies, language, and education, 267–280. Oxford: Oxford University Press.
Meir, Irit, Carol A. Padden, Mark Aronoff, & Wendy Sandler. 2012. Body as subject. Journal of Linguistics 43(3). 531–563.
Morgan, Gary & Bencie Woll. 2007. Understanding sign language classifiers through a polycomponential approach. Lingua 117. 1159–1168.
Morgan, Gary, Rosalind Herman, Isabelle Barriere, & Bencie Woll. 2008. The onset and the mastery of spatial language in children acquiring British Sign Language. Cognitive Development 23. 1–19.
Neville, Helen J., Daphne Bavelier, David Corina, Josef Rauschecker, Avi Karni, Anil Lalwani, Allen Braun, Vince Clark, Peter Jezzard, & Robert Turner. 1998. Cerebral organisation for language in deaf and hearing subjects: Biological constraints and effects of experience. Proceedings of the National Academy of Sciences of the United States of America (PNAS) 95. 922–929.
Newport, Elissa L. & Ted R. Supalla. 1980. Clues from the acquisition of signed and spoken language. In Ursula Bellugi & Michael Studdert-Kennedy (eds.), Signed and spoken language: Biological constraints on form, 187–211. Weinheim: Verlag Chemie.
Newport, Elissa L. & Richard P. Meier. 1985. Acquisition of American Sign Language. In Dan I. Slobin (ed.), The crosslinguistic study of language acquisition. Vol. 1: The data, 881–938. Hillsdale, NJ: Lawrence Erlbaum.
Ortega, Gerardo, Beyza Sümer, & Aslı Özyürek. 2017. Type of iconicity matters in the vocabulary development of signing children. Developmental Psychology 53(1). 89–99.
Özyürek, Aslı, Inge Zwitserlood, & Pamela Perniss. 2010. Locative expressions in signed languages: A view from Turkish Sign Language (TİD). Linguistics 48(5). 1111–1145.
Özyürek, Aslı & Pamela Perniss. 2011. Event representation in sign language: A crosslinguistic perspective. In Jürgen Bohnemeyer & Eric Pederson (eds.), Event representation in language: Encoding events at the language-cognition interface, 84–107. Cambridge: Cambridge University Press.
Padden, Carol A. 1990. The relation between space and grammar in ASL verb morphology. In Ceil Lucas (ed.), Sign language research: Theoretical issues, 118–132. Washington, DC: Gallaudet University Press.
Padden, Carol, So-One Hwang, Ryan Lepic, & Sharon Seegers. 2015. Tools for language: Patterned iconicity in sign language nouns and verbs. Topics in Cognitive Science 7(1). 81–94.
Perniss, Pamela. 2007a. Space and iconicity in German Sign Language (DGS). Nijmegen: Max Planck Institute for Psycholinguistics PhD dissertation.
Perniss, Pamela. 2007b. Achieving spatial coherence in German Sign Language narratives: The use of classifiers and perspective. Lingua 117. 1315–1338.
Perniss, Pamela. 2007c. Locative functions of simultaneous perspective constructions in German Sign Language narratives. In Myriam Vermeerbergen, Lorraine Leeson, & Onno A. Crasborn (eds.), Simultaneity in signed languages: Form and function, 27–54. Amsterdam: John Benjamins.
Perniss, Pamela. 2012. Use of space. In Roland Pfau, Markus Steinbach, & Bencie Woll (eds.), Sign language: An international handbook, 412–431. Berlin: De Gruyter Mouton.
Perniss, Pamela & Aslı Özyürek. 2008. Representations of action, motion, and location in sign space: A comparison of German (DGS) and Turkish (TİD) Sign Language narratives. In Josep Quer (ed.), Signs of the time. Selected papers from TISLR 8, 353–376. Hamburg: Signum.
Perniss, Pamela & Aslı Özyürek. 2015. Visible cohesion: A comparison of reference tracking in sign, speech, and co-speech gesture. Topics in Cognitive Science 7(1). 36–60.
Perniss, Pamela, Inge Zwitserlood, & Aslı Özyürek. 2015. The design space of spatial language. Language 91(3). 611–641.
Pfau, Roland, Martin Salzmann, & Markus Steinbach. 2018. The syntax of sign language agreement: Common ingredients, but unusual recipe. Glossa 3(1): 107. 1–46.
Piaget, Jean & Bärbel Inhelder. 1956. The child’s conception of space. London: Routledge & Kegan Paul.
Poizner, Howard, Edward Klima, & Ursula Bellugi. 1987. What the hands reveal about the brain. Cambridge, MA: MIT Press.
Pyers, Jennie, Anna Shusterman, Ann Senghas, Elisabeth S. Spelke, & Karen Emmorey. 2010. Evidence from an emerging sign language reveals that language supports spatial cognition. Proceedings of the National Academy of Sciences of the United States of America 107(27). 12116–12120.
Pyers, Jennie, Pamela Perniss, & Karen Emmorey. 2015. Viewpoint in the visual-spatial modality: The coordination of spatial perspective. Spatial Cognition and Computation 15(3). 143–169.
Quer, Josep. 2011. When agreeing to disagree is not enough: Further arguments for the linguistic status of sign language agreement. Theoretical Linguistics 4. 189–196.
Quinto-Pozos, David & Sarika Mehta. 2010. Register variation in mimetic gestural complements to signed language. Journal of Pragmatics 42. 557–584.
Quinto-Pozos, David & Fey Parrill. 2015. Signers and co-speech gesturers adopt similar strategies for portraying viewpoint in narratives. Topics in Cognitive Science 7(1). 12–35.
Sapountzaki, Galini. 2012. Agreement auxiliaries. In Roland Pfau, Markus Steinbach, & Bencie Woll (eds.), Sign language: An international handbook, 204–227. Berlin: De Gruyter Mouton.
Schembri, Adam. 2003. Rethinking “classifiers” in signed languages. In Karen Emmorey (ed.), Perspectives on classifier constructions in sign languages, 3–34. Mahwah, NJ: Lawrence Erlbaum.
Schembri, Adam, Caroline Jones, & Denis Burnham. 2005. Comparing action gestures and classifier verbs of motion: Evidence from Australian Sign Language, Taiwan Sign Language, and nonsigners’ gestures without speech. Journal of Deaf Studies and Deaf Education 10. 272–290.
Schembri, Adam, Kearsy Cormier, & Jordan Fenlon. 2018. Indicating verbs as typologically unique constructions: Reconsidering verb ‘agreement’ in sign languages. Glossa 3(1): 89. 1–40.
Schober, Michael F. 1993. Spatial perspective taking in conversation. Cognition 47. 1–24.
Sevcikova Sehyr, Zed & Kearsy Cormier. 2015. Perceptual categorization of handling handshapes in British Sign Language. Language and Cognition 8(4). 501–532.
Slobin, Dan I., Nini Hoiting, Marlos Kuntze, Reyna Lindert, Amy Weinberg, Jennie Pyers, Michelle Anthony, Yael Biederman, & Helen Thumann. 2003. A cognitive/functional perspective on the acquisition of “classifiers”. In Karen Emmorey (ed.), Perspectives on classifier constructions in sign languages, 271–298. Mahwah, NJ: Lawrence Erlbaum.
So, Wing Chee, Marie Coppola, Vincent Liccidarello, & Susan Goldin-Meadow. 2005. The seeds of spatial grammar in the manual modality. Cognitive Science 29. 23–37.
Steinbach, Markus & Edgar Onea. 2016. A DRT analysis of discourse referents and anaphora resolution in sign language. Journal of Semantics 33. 409–448.
Sümer, Beyza. 2015. Acquisition of spatial language by signing and speaking children: A comparison of Turkish Sign Language (TİD) and Turkish. Nijmegen: Max Planck Institute for Psycholinguistics PhD dissertation.
Sümer, Beyza, Inge Zwitserlood, Pamela Perniss, & Aslı Özyürek. 2013. Acquisition of locative expressions in children learning Turkish Sign Language (TİD) and Turkish. In Engin Arik (ed.), Current directions in Turkish Sign Language research, 243–272. Newcastle upon Tyne: Cambridge Scholars Publishing.
Sümer, Beyza, Pamela Perniss, Inge Zwitserlood, & Aslı Özyürek. 2014. Learning to express “left-right” & “front-behind” in a sign versus spoken language. In Paul Bello, Marcello Guarini, Marjorie McShane, & Brian Scassellati (eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society, 1550–1555. Austin, TX: Cognitive Science Society.
Sümer, Beyza, Pamela Perniss, & Aslı Özyürek. 2016. Viewpoint preferences in signing children’s spatial descriptions. In Jennifer Scott & Deb Waughtal (eds.), Proceedings of the 40th Annual Boston University Conference on Language Development (BUCLD 40), 360–374. Boston, MA: Cascadilla Press.
Supalla, Ted R. 1978. Morphology of verbs of motion and location in American Sign Language. In Frank Caccamise (ed.), American Sign Language in a bilingual, bicultural context: Proceedings of the National Symposium on Sign Language Research and Teaching, 27–45. Silver Spring, MD: NAD.
Supalla, Ted R. 1982. Structure and acquisition of verbs of motion and location in American Sign Language. San Diego, CA: UCSD PhD dissertation.
Supalla, Ted. 1986. The classifier system in American Sign Language. In Colette Craig (ed.), Noun classes and categorization, 181–214. Amsterdam: John Benjamins.
Tang, Gladys, Felix Sze, & Scholastica Lam. 2007. Acquisition of simultaneous constructions by deaf children of Hong Kong Sign Language. In Myriam Vermeerbergen, Lorraine Leeson, & Onno Crasborn (eds.), Simultaneity in signed languages: Form and function, 283–316. Amsterdam: John Benjamins.
Taylor, Holly A. & Barbara Tversky. 1996. Perspective in spatial descriptions. Journal of Memory and Language 35(3). 371–391.
Valli, Clayton & Ceil Lucas. 1995. The linguistics of American Sign Language: An introduction (2nd edition). Washington, DC: Gallaudet University Press.
Van Hoek, Karen. 1992. Conceptual spaces and pronominal reference in American Sign Language. Nordic Journal of Linguistics 15. 183–199.
Wienholz, Anne, Derya Nuhbalaoglu, Nivedita Mani, Annika Herrmann, Edgar Onea, & Markus Steinbach. 2018. Pointing to the right side? An ERP study on anaphora resolution in German Sign Language. PLoS ONE 13(9). 1–19.
Winston, Elisabeth A. 1995. Spatial mapping in comparative discourse frames. In Karen Emmorey & Judy S. Reilly (eds.), Language, gesture, and space, 87–114. Hillsdale, NJ: Lawrence Erlbaum.
Zwitserlood, Inge. 2012. Classifiers. In Roland Pfau, Markus Steinbach, & Bencie Woll (eds.), Sign language: An international handbook, 158–186. Berlin: De Gruyter Mouton.
18 SPECIFICITY AND DEFINITENESS
Theoretical perspectives
Gemma Barberà

18.1  Introduction
Definiteness and specificity are two interrelated but independent notions. While definiteness encodes the information that the sender assumes that the addressee has, specificity encodes the knowledge that the sender has and the anchoring to an item. Definite noun phrases (NPs) encode that both sender and addressee may identify the discourse referent. Indefinite NPs mark that the addressee may not identify the entity being talked about. As for specificity, it is generally assumed that while specific indefinite NPs exhibit a sender-addressee asymmetry, since only the sender may identify the discourse referent or may anchor it to a discourse item, non-specific indefinite NPs are symmetric since they mark that neither the sender nor the addressee can identify or anchor it.1 These general semantic and pragmatic concepts may be mirrored in the linguistic system. In English, there is an overt marking for definite NPs (1a) and an overt marking for indefinite NPs (1b). The indefinite article a may be ambiguous between a specific and a non-specific reading. That is, in the specific reading, only the sender may identify the entity being talked about. In the non-specific reading, none of the participants in the context may identify it. Although specificity is not overtly marked in the English determiner system, it has observable effects on co-reference. In English, the kind of co-referential pronoun disambiguates the two possible readings (Partee 1970). Under the specific reading, the indefinite NP ‘a book’ refers to an identifiable book (2a). Under the non-specific reading, Joana is looking for an element of the kind ‘syntax book’, but there is no particular book that the sender has in mind when uttering (2b).

(1)  a. The book that we read last month was about definiteness.
     b. Next month, we will read a book about definiteness.

(2)  Joana wants to read a book about syntax …
     a. but she cannot find it.
     b. but she cannot find one.
The range of NP types that have definiteness as part of their meaning includes determiners (the English definite article the), demonstratives (this, that, those), proper nouns (Joana, Martí), possessives (my, your, her), and personal pronouns (you, she, they). Indefiniteness is encoded with the indefinite determiner in languages that have one (for instance, English a), generic ontological-category nouns (such as someone, something, somewhere in English), interrogative pronouns (such as neaq-naa ‘somebody/who’ and qway ‘something/what’ in Khmer (Haspelmath 1997: 27)), one-based indefinite particles (English one, French on, German man), cardinals, and quantifiers (such as most, many). From a theoretical point of view, definiteness is usually associated with uniqueness and familiarity. On the one hand, uniqueness approaches are built on the insight that a definite description is used to refer to entities that have a role or a property which is unique (Kadmon 1990; Abbott 1999). Uniqueness means that there is one and no more than one entity that has a particular property, as exemplified in (3).

(3)  The sun is shining.

On the other hand, pragmatic theories tend to treat familiarity and anaphoricity as the central notion for definiteness (Kamp 1981; Heim 1982; Roberts 2003). They are based on the idea that definite descriptions serve to pick out discourse referents that are in some sense familiar (i.e., identifiable) to the discourse participants, because they are co-present (4a), culturally shared, and therefore part of the common ground (4b) or already mentioned in the discourse (4c).

(4)  a. Just give the shelf a quick wipe, will you, before I put this vase on it?
     b. The president is visiting the school tomorrow.
     c. An elegant dark-haired woman, a man with dark glasses, and two children entered the compartment. I immediately recognized the woman.

The two types of definites have been shown cross-linguistically to include a specialized marking: one type of definite, involving weak articles, is based on uniqueness, whereas strong article definites crucially involve an anaphoric link (Schwarz 2009). Specificity is encoded differently in each language. Some languages encode it in the article system, others encode it with affixes, others encode it in the expression of mood, and others lack encoding of this semantic-pragmatic notion. Samoan and Maori are two Polynesian languages with an article system that distinguishes specificity rather than definiteness (Lyons 1999). Samoan uses the article le with specific NPs, which indicates that the discourse referent refers to one particular entity regardless of whether it is definite or indefinite. The other article (se) is used with non-specific discourse referents, which do not refer to a particular, specified item (Mosel & Hovdhaugen 1992, cited in Lyons 1999: 57). In Maori, the article he (which does not distinguish number) is used when the kind of entity is crucial, and teetahi/eetahi when the number is significant (Chung & Ladusaw 2004). The meanings and patterns of use of Maori articles are not yet fully established, but it seems that its article system relates partly to the distinction between specific and non-specific, rather than definite and indefinite. Another way of marking specificity is by means of affixes. According to Enç (1991), Turkish encodes specificity with an accusative affix. The following minimal pair taken from Enç (1991) shows that
when the NP has overt case morphology, it refers to a specific discourse referent (5a). The indefinite NP with accusative case has a covert partitive reading, and it introduces into the domain of discourse individuals from a previously given set. This contrasts with (5b), where the NP without case morphology refers to a non-specific entity.

(5)  a. Iki kiz-i taniyordum.
        two girl-ACC I-knew
        ‘I knew two of the girls.’
     b. Iki kiz taniyordum.
        two girl I-knew
        ‘I knew two girls.’
        (Turkish, Enç 1991: 6)

Leaving aside the overt marking, from a theoretical point of view the different kinds of specific indefinites have been extensively discussed in the literature (von Heusinger 2002, 2011). Among the various types of specific indefinites, three types of specificity are considered for the purpose of the present article, namely scope, epistemicity, and partitivity. Section 18.3.1 applies the two types of definiteness to sign language data, and Section 18.3.2 illustrates each specificity notion with sign language examples.

18.2  Manual and non-manual marking

18.2.1  Lexical determiners and non-manual marking
Sign languages have a rich array of lexical signs expressing indefiniteness, but to the best of my knowledge, only a few lexical signs have been claimed so far to be specialized for definiteness. In this first section, the focus is on lexical determiners and non-manual marking. Other markings of definiteness include the overt prenominal index sign (see Sections 18.2.2 and 18.3.1), as well as particular non-manual marking (see below). In American Sign Language (ASL), the sign SELF/G2 has been considered to be a definite article (6a, b), but also a specificity marker (Wilbur 1996), and a presuppositionality marker (Mathur 1996).3

(6)  a. YESTERDAY MY CAR SELF/G BREAK-DOWN
        ‘Yesterday my car broke down.’
     b. BUT LAST YEAR, ONCE FATHER SELF/G FUNNY NOT
        ‘But one time last year, my father wasn’t at all funny.’
        (ASL, Fischer & Johnson 2012[1982]: 248)

According to the extended typological study of indefiniteness in spoken languages, there are three different types of derivational bases from which indefinite determiners and pronouns are derived. First, indefinites appear to have been grammaticalized from the numeral ‘one’. Second, they have evolved from interrogative elements, like ‘who’, ‘what’, and ‘where’. Finally, they have also evolved from generic ontological-category nouns, such as ‘person’ or ‘thing’ (Haspelmath 1997; Bhat 2005). This pattern is also attested in some sign languages. In ASL, for instance, the indefinite animate determiner translated as ‘someone’ has the same handshape and orientation as the numeral ONE and the classifier for a person or an animate entity, with an additional slight tremoring and circular movement. This happens to be also the case in British Sign Language (BSL; Cormier 2012). As for ASL, while the numeral sign ONE triggers a specific interpretation (7a), the indefinite SOMETHING/ONE, articulated with a tremoring movement, triggers a non-specific interpretation (7b). The non-manuals that correlate with this sign correspond to those associated with uncertainty, namely tensed nose, lowered brows, and sometimes also raising the shoulders (MacLaughlin 1997).

(7)  a. ONE DOG BITE IX-1
        ‘A (specific) dog bit me.’
     b. SOMETHING/ONE DOG BITE IX-1
        ‘Some dog bit me.’
        (ASL, MacLaughlin 1997: 118)

In Hong Kong Sign Language (HKSL) and Catalan Sign Language (LSC), the indefinite determiner ONE has the same articulation as the numeral sign, but unlike ASL, it does not involve a tremoring movement (Tang & Sze (2002) for HKSL; Barberà (2015) for LSC). The sign ONE in HKSL usually selects a noun. When it occurs in prenominal position, the sign is ambiguous between a determiner and a numeral (8a). However, in postnominal position only the numeral reading is possible (8b). With indefinite non-specific discourse referents, the index finger moves from left to right with a tremoring movement involving the wrist (8c). As for the non-manual marking, the (in)definiteness distinction in HKSL is marked by eye gaze behavior: while definite determiners co-occur with an eye gaze directed to the referential locus (R-locus),4 for indefinite specific ones, eye gaze is directed towards the addressee. When the tremoring movement for non-specific entities is articulated, eye gaze is never directed to space but instead towards the path of the hand, suggesting that there is no R-locus established for the discourse referent (Tang & Sze 2002: 304).

(8)  a. YESTERDAY ONE FEMALE-KID COME
        ‘A girl came yesterday.’ (indefinite or numeral reading)
     b. YESTERDAY FEMALE-KID ONE COME
        ‘One girl came yesterday.’ (numeral reading only)
     c. IX-3 BOOK GIVE ONEdet-path PERSON
        ‘His book was given to someone.’ (non-specific reading)
        (HKSL, Tang & Sze 2002: 301–304)

Indefinite pronouns in sign languages may also derive from interrogative pronouns (Zeshan 2004). In LSC, the indefinite pronoun expressed with the interrogative pronoun may have three possible forms: the concatenation of the interrogative and a plural index pronoun (9a), the concatenation of the interrogative and a quantifier (9b), and the interrogative pronoun by itself (9c). Non-manual marking licenses the indefinite interpretation, and therefore ambiguity does not arise.

(9)  a. WHO^IX-3pl.up MONEY 3-STEAL-3up
        ‘Someone stole the money.’
        (LSC, Barberà & Quer 2013: 254)
     b. WHO^SOMEup BICYCLE 1-STEAL-3up+++ TWO TIMES
        ‘Someone stole my bicycle two times.’
     c. WHO MONEY 3-STEAL-3up
        ‘Someone stole the money.’
        (LSC, Barberà 2016: 24)
In ASL, a sign with a similar articulation but distinguishable from the wh-sign glossed as WHAT has been considered to have the same function as an indefinite pronoun (Conlin et al. 2003). The articulation of the sign involves a single outward movement, rather than side-to-side shaking of the hands. Moreover, there is a tendency for this particle to phonologically cliticize to the sign it follows. The non-manuals that correlate with this sign correspond to those associated with uncertainty, as defined above. Finally, indefinite pronouns appear to have been grammaticalized from generic nouns such as ‘person’ or ‘thing’. In LSC and Spanish Sign Language (LSE), the reduplicated form of the sign PERSON has an indefinite reading, similar to ‘people’ or ‘they’, as shown in (10) and (11) (Barberà & Costello 2017).

(10)  IX BALEAR PERSON+++ SPEAK CATALAN
      ‘In the Balearic Islands, people/they speak Catalan.’
      (LSC, Barberà & Costello 2017: 57)

(11)  IX ISRAEL IX PERSON+++ GO+++ PRAY SATURDAY
      ‘In Israel, people/they pray on Saturdays.’
      (LSE, Barberà & Costello 2017: 58)

Pfau & Steinbach (2006) describe the indefinite pronoun in German Sign Language (DGS) and Sign Language of the Netherlands (NGT) as a grammaticalized combination of the numeral ONE and the sign PERSON. This indefinite pronoun does not necessarily refer to only one person, as it may also be understood as plural. In example (13), for instance, it may very well be the case that two or three people are recruited for the dishes.

(12)  IX-1 ONE^PERSON SEE
      ‘I’ve seen someone.’
      (DGS, Pfau & Steinbach 2006: 35)

(13)  ONE^PERSON WASH-DISH DO MUST
      ‘Someone has to do the dishes.’
      (NGT, Pfau & Steinbach 2006: 35)

Moreover, some sign languages (e.g., LSC and LIS) have a lexical sign that marks exclusiveness and thus non-specificity. One example is the sign HEARING in LIS, which is used in contexts where the identity of the discourse referent is neither known nor close to the sender (Geraci 2012). As the example below shows, the use of this sign does not have a pejorative meaning, as it can be used in a context where the discourse referent helps the sender.

(14)  HEARING IX-3up COME HELP
      ‘Someone (not known) came and helped.’
      (LIS, Geraci 2012)

The non-​manual marking for definiteness and specificity differs across sign languages (for extensive discussion of non-​ manuals, see Wilbur, Chapter  24). In some sign languages, the co-​articulation of squinted eyes on the NP marks discourse referents that are both familiar and accessible by the discourse participants. This has been attested for Danish Sign Language (DSL; Engberg-​Pedersen 1993), Israeli Sign Language (ISL;
Dachkovsky & Sandler 2009), and DGS (Herrmann 2013). Raised eyebrows (which may convey topic marking; see Kimmelman & Pfau, Chapter  26) also mark shared knowledge of the referent being talked about. In NGT and Russian Sign Language (RSL), a wrinkled nose appears to combine with NPs when the discourse referent is known to the addressee but not active in the discourse (Kimmelman (2014: 56), shown in Figure 18.1a for NGT). Indefiniteness is marked in LSC by sucking the cheeks in and pulling the mouth ends down, sometimes combined with a shrug (Barberà (2015), shown in Figure 18.1b).

Figure 18.1  (In)definiteness non-manual marking: (a) definiteness in NGT; (b) indefiniteness in LSC (image in (b) from Barberà 2015: 147, Figure 40; © De Gruyter Mouton, reprinted with permission)

Particular non-​manual markings have been attested for the specific vs. non-​specific distinction. In HKSL, specificity is marked with eye gaze towards the addressee, and non-​specificity is marked with round protruded lips, lowered eyebrows, and a visible bilabial sound (Tang & Sze 2002). In the latter interpretation, eye gaze follows the path of the hand, suggesting that there is no localized referent in signing space (see Perniss, Chapter 17). Specificity in ASL is marked with a direct eye gaze to the spatial location, while non-​specificity is marked with a darting eye gaze generally towards an upward direction (Bahan 1996). Similar to ASL, non-​specificity in LSC is marked with a darting eye gaze towards the upper frontal plane co-​occurring with the NP (Barberà 2015).

Figure 18.2  Non-​specificity non-​manual marking in LSC (Barberà 2015: 189, Figure 54; © De Gruyter Mouton, reprinted with permission)

This section has shown that sign languages employ complex manual and non-manual marking for definiteness and specificity, and that the three different types of derivational bases from which indefinite markers are derived are also found in sign languages. Table 18.1 summarizes each type of marking as found so far in the few sign languages for which definiteness and specificity have been analyzed. Further studies with broader signed data sets will surely extend the present description.
Table 18.1  Derivational bases of indefinite markers in different sign languages

                                      ASL   BSL   DGS           HKSL   LSC   NGT
Interrogative pronouns                ✓                                 ✓
Generic ontological-category nouns                                      ✓
Numeral ‘one’                         ✓     ✓     Combination   ✓      ✓     Combination

18.2.2  Order of signs within the noun phrase
The internal order of signs in the NP is an important factor that contributes to particular semantic readings (see also Abner, Chapter 10). Two aspects are crucial: the position of the index sign with respect to the nominal and the modification of determiners and cardinal signs. Bare nouns are also significant in the interpretation of (in)definiteness in sign languages, and they are further treated in Section 18.3.1. Prenominal index signs, as in (15a), have been argued to function as a definite article in some sign languages like ASL (Bahan et al. 1995; MacLaughlin 1997; Wilbur 2008), whereas the postnominal index functions as an adverbial and does not display a definiteness restriction (15b).

(15)  a. IX WOMAN IX ARRIVE EARLY
         ‘The/That woman there arrived early.’
      b. JOHN SEE MAN IX
         ‘John saw a man there.’
         (ASL, Bahan et al. 1995: 3)

Other authors have previously claimed for different sign languages that an index sign co-occurring with a noun is used to express prominence or topicality of the corresponding discourse referent (Engberg-Pedersen (1993) for DSL; Winston (1995) for ASL; Rinfret (2009) for Quebec Sign Language). This means that the most prominent discourse referent at a particular point in discourse will co-occur with an index sign when it is first mentioned, and this may be analyzed in terms of its effects in the ongoing discourse. On a different view, Zimmer & Patschke (1990) for ASL and Bertone (2009) for LIS explicitly claim that an index sign directed to the signing space specifies the noun it co-occurs with. However, they do not further specify what is meant by specificity or which properties it encompasses. As will be shown in Section 18.2.3, in LSC, the determiner index sign is not required within a definite NP, but rather it co-occurs with specific and topical NPs (Barberà 2015). The order of nominal modification has furthermore proven relevant to definiteness distinctions. Bringing together corpus data and elicited data, Mantovan (2017) shows that when the sign ONE in LIS appears before the noun, it is often ambiguous between determiner and cardinal status. When ONE follows the noun, it is associated with the quantificational reading, and it does not combine with the typical indefinite non-manual marker, which consists in pulling the mouth ends down (Figure 18.1b).
The interpretation of cardinals in LIS also varies according to the distribution of the NP. Mantovan (2017) shows that both prenominal and postnominal cardinals trigger an indefinite interpretation. For the definite reading to arise, only the postnominal cardinal or a complex NP formed by a noun, a cardinal, and a classifier are possible. The non-​manuals are crucial to disambiguate a postnominal cardinal. Within an indefinite interpretation, the NP is usually accompanied by backward-​tilted head and raised eyebrows (Figure 18.3a). Within a definite interpretation, the postnominal cardinal is accompanied by squinted eyes, lowered eyebrows, and chin down (Figure 18.3b).

Figure 18.3  (In)definiteness non-manual marking in LIS: (a) indefinite non-manual marking; (b) definite non-manual marking (Mantovan 2017: 174; images © De Gruyter Mouton, reprinted with permission)

18.2.3  Modulations in signing space
The (non-)specificity distinction is overtly expressed in the use of signing space in LSC.5 Discourse referents that are specific are localized at a low R-locus. In contrast, discourse referents that are non-specific are localized at a high R-locus.6 This is shown in the semi-minimal pair found below.7 The discourse referent in (16a) is localized at a low R-locus (Figure 18.4a). It corresponds to a particular individual, which is identifiable by the signer, and thus triggers a specific interpretation. In contrast, the discourse referent in (16b) is localized at a high R-locus (Figure 18.4b). It does not correspond to a particular individual (therefore it is not identifiable by the signer), and a non-specific interpretation arises.

(16)  a. GROUPlo.a FRIEND SOMElo.a INSIDE IX-3c HIDE DURING YEAR-TWO
         ‘Some of the friends were hidden there for two years.’ (→ specific interpretation)
      b. IX-3pl.up.b SOMEup.b DENOUNCE-3up.b IX-3c THERE-IS
         ‘Someone denounced they were there.’ (→ non-specific interpretation)
         (LSC, Barberà 2015: 162–164)

Figure 18.4  Two R-loci articulated on the frontal plane: (a) NP localized at a low R-locus; (b) NP localized at a high R-locus (Barberà 2015: 174, Figures 49 & 50; © De Gruyter Mouton, reprinted with permission)

The articulation of signs directed to the signing space also varies depending on the direction and, more specifically, on the interpretation they receive. Signs directed towards low R-loci have a tensed realization and are directed towards a particular point in space. In such cases, a specific reading arises. Signs directed to high R-loci, which correspond to a non-specific interpretation, are non-tensed, have a vague realization, and are directed towards a more widespread area rather than a particular spatial location (cf. Barberà (2015) for a distinction between strong and weak localization). This resembles the articulation of R-loci in ASL to express definiteness. According to MacLaughlin (1997), while the definite determiner in ASL accesses a point in signing space, the indefinite determiner involves an articulatory movement within a small region.

A solid grammatical test to distinguish between specific and non-specific readings is based on the possibility of having a co-referential pronoun. Only specific NPs establish an R-locus, which may be referred back to by an anaphoric pronoun in subsequent discourse (Barberà 2016). In contrast, intensional contexts in which the sender is referring to a non-specific discourse referent allow a co-referential pronoun only if it is embedded under an operator, like a modal verb. In the LSC case, NPs localized at a low R-locus may have a co-referential pronoun in further discourse, corresponding to a specific interpretation (17). When the NP is localized at a high R-locus, the co-referential pronoun alone is not felicitous (18a); it needs to be embedded under a modal verb, like MUST, and expressed as an overt or as a null pronoun (18b).

(17)  CAT IX-3lo, IX1 WANT BUY. IX-3lo LEG BIG CL:‘big-legs’.
      ‘I want to buy a certain catspec. It has long legs.’
      (LSC, Barberà 2016: 30)

(18) a. CAT IX-3up, IX1 WANT BUY. #IX-3up LEG BIG CL:‘big-legs’.
        ‘I want to buy a catnon-spec. #It has long legs.’
     b. CAT IX-3up, IX1 WANT BUY. MUST LEG BIG CL:‘big-legs’.
        ‘I want to buy a catnon-spec. It must have long legs.’
        (LSC, Barberà 2016: 30)


18.3  Types of definiteness and specificity

18.3.1  Definiteness: familiarity and uniqueness

In most sign languages studied to date, the use of signing space plays a crucial role in representing the referential status of discourse referents. Determiners, and the lack of them, have an impact on the interpretation of the co-occurring noun. De Vriendt & Rasquinet (1990) observe that sign languages generally do not make use of determiners in generic NPs. Since the expression of index signs attributes some referential properties to the NP, generic statements do not co-occur with an index sign, and hence, the entity is not localized in space. In LSC, bare nouns may assume a generic interpretation if they are not localized in space (Barberà & Quer 2015). As shown in the minimal pair below, when the NP is localized in the signing space, it is understood as referential (i.e., as denoting a specific dog, (19a)), rather than generic (19b).

(19) a. DOGa CHARACTER OBEDIENT+++
        ‘That dog is obedient.’
     b. DOG CHARACTER OBEDIENT+++
        ‘Dogs are obedient.’
        (LSC, Barberà & Quer 2015)

Carlson & Sussman (2005) propose a distinction between strong and weak definites based on the fact that weak definites do have a referent, but not a uniquely identifiable one (see also Schwarz 2009). The strong definite in (20a) refers to a particular and specific book, while the weak definite in (20b) does not need a uniquely identifiable entity to be understood.

(20) a. I’ll read the book when I get home.
     b. I’ll read the newspaper when I get home.

In an experimental study, Machado de Sá et al. (2012) show that in Brazilian Sign Language (Libras) the weak/strong definiteness distinction is related to different spatial localizations. While strong definites are overtly marked by localizing the corresponding NP in a marked lateral location in signing space (‘determined signing space’, in their terminology), weak definites are encoded by a lack of localization of the NP in signing space. This neutral use of space is an overt marking of weak definites.

For ASL, similar results have been obtained (Irani 2018; based on Schwarz 2009). The strong definite article in ASL is expressed with an NP preceded by an index sign. This is an instance of bridging and corresponds to the notion of familiarity; that is, the definite NP is an indication of anaphoric expression or reference that is shared by sender and addressee. In contrast, the weak definite article in ASL is encoded by a bare NP. This corresponds to the notion of uniqueness; that is, the definite NP denotes a unique referent in the relevant universe.8
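The familiarity/uniqueness contrast just described can be given a rough formal rendering in the spirit of Schwarz (2009); the LaTeX sketch below uses our own simplified notation and is only meant to make the two readings explicit, not to reproduce the analyses of the works cited:

\[
[\![\text{the}_{\text{weak}}\ N]\!] = \iota x.\,\text{N}(x)
\qquad\qquad
[\![\text{the}_{\text{strong}}\ N]\!] = \iota x.[\text{N}(x) \wedge x = y_{\text{antecedent}}]
\]

The weak (uniqueness-based) definite merely picks out the unique N in the relevant situation, which corresponds to the bare NP option in ASL and the non-localized NP in Libras; the strong (familiarity-based) definite additionally requires identity with a previously introduced discourse referent, which corresponds to the IX+noun option and to localization in a marked area of signing space.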

18.3.2  Specificity: scope, epistemicity, and partitivity

The different kinds of specific indefinites have been extensively discussed in the literature (von Heusinger 2002, 2011). Among the various types of specific indefinites, three basic primitives are considered here, following Farkas (2002): scope, epistemicity, and partitivity.


Scopal specificity distinguishes indefinite NPs that are bound by an operator (such as a verb of propositional attitude, negation, or a quantifier) from those that are not (Matthewson 1998; Farkas 2002; Ionin 2006). Under the reading in (21a), there is a particular Norwegian woman, and Frank wants to marry her. This corresponds to a wide scope reading, and a specific interpretation arises. Under the reading in (21b), Frank’s desire is to marry a woman who has Norwegian nationality, but he still has not found anyone. This corresponds to a narrow scope reading, and a non-specific interpretation arises: the indefinite is interpreted inside the scope of the modal verb ‘want’. This is why the only felicitous continuation yielding a non-specific reading requires the modal operator ‘will’.

(21)  Frank wants to marry a Norwegian.
      a. He met her last year. (→ there is a particular Norwegian)
      b. He will move to Norway to meet someone.

In sign languages, scope distinctions may be expressed explicitly both with the use of overt quantifiers and through a complex interaction with the signing space. In RSL, for instance, when two or more quantifiers occur within the same clause, scope ambiguities may arise depending on the quantifiers involved (Kimmelman 2017). For instance, when both the subject and the object contain numerals, the cumulative interpretation is the only acceptable one (22a). In contrast, use of the distributive quantifier EVERY and/or the distributive locations for the object forces a wide scope interpretation of the subject (22b) (‘er’ = eyebrow raise).

            er                              er
(22) a. GIRL THREE PAINT FINISHED FLOWER TEN
        ‘Three girls painted ten flowers.’
        (→ cumulative reading only: group of three girls painted a group of ten flowers)
                          er
     b. THREE GIRL EVERY IXpl PAINT FINISHED FLOWER TEN-DISTR
        ‘Three girls painted ten flowers each.’
        (RSL, Kimmelman 2017: 827)

Moreover, when the subject contains the quantifier EVERY or ALL, and the object is a singular indefinite NP, two different scopes are possible (23a,b). However, if the subject is a bare NP, and the object contains a universal quantifier, then the universal quantifier has to take narrow scope (24).

                       er
(23)  VACATION STUDENT EVERY IXpl READ BOOK PUSHKIN POSS
      ‘During the vacation every student read a book by Pushkin.’
      a. wide scope: one > every, everyone read the same book
      b. narrow scope: every > one, everyone read one book
      (RSL, Kimmelman 2017: 828)

(24)  WOMAN READ BOOK ALL
      ‘A woman read all the books.’ (one > all)
      (RSL, Kimmelman 2017: 828)
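To make the contrast explicit, the readings just described for (23) and (24) can be rendered in a simplified first-order LaTeX sketch; the notation and predicate names are introduced here for illustration only and abstract away from the event structure of the RSL examples:

\[
\begin{aligned}
\text{(23a) wide scope of the indefinite:} &\quad \exists y\,[\text{book}(y) \wedge \forall x\,[\text{student}(x) \rightarrow \text{read}(x,y)]]\\
\text{(23b) narrow scope of the indefinite:} &\quad \forall x\,[\text{student}(x) \rightarrow \exists y\,[\text{book}(y) \wedge \text{read}(x,y)]]\\
\text{(24) one > all only:} &\quad \exists x\,[\text{woman}(x) \wedge \forall y\,[\text{book}(y) \rightarrow \text{read}(x,y)]]
\end{aligned}
\]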

LSC has two indefinite pronouns, which show a different scope behavior with respect to the adverb TWO TIMES (Barberà & Cabredo Hofherr 2017). While the indefinite pronoun ONEup only allows a wide scope reading (25a), WHO^SOMEup allows both wide and narrow scope readings (25b).

(25) a. ONEup IX1 BIKE 1-STEAL-3up+++ TWO TIMES
        ‘Someone stole my bike two times.’ (someone > 2 times)
     b. WHO^SOMEup IX1 BIKE 1-STEAL-3up+++ TWO TIMES
        ‘Someone stole my bike two times.’
        i.  someone > 2 times
        ii. 2 times > someone
        (LSC, Barberà & Cabredo Hofherr 2017: 97)

However, the interaction of the signing space with the pronoun WHO^SOME disambiguates the two potential readings. In LSC, the establishment of two different R-loci for the subject explicitly marks the distribution over the subject, resulting in a reading where the indefinite subject co-varies with the stealing event (narrow scope reading). In (26), the agreement verb STEAL is inflected with two lateral R-loci, and this triggers a narrow scope reading, namely ‘there were two times in which someone stole my bike’.

(26)  WHO^SOMEup IX1 POSS BIKE 1-STEAL-3up.a 1-STEAL-3up.b TWO TIMES
      ‘They stole my bike two times.’ (2 times > someone)
      (LSC, Barberà & Cabredo Hofherr 2017: 97)

Moreover, in LSC, co-variation with the event is also possible. With WHO^SOME, when there is quantification over the event (here expressed with the adverb ALWAYS), the subject co-varies with the event (27a): the stealing event has happened many times, and the subject of each event has been different. In contrast, with the pronoun ONE, there is no co-variation of the subject with respect to the event (27b): each stealing event is produced by the same non-specific referent.

(27) a. BUILDING IX POSS-1 OFFICE DANGER. WHO^SOMEup STEAL-3up MONEY ALWAYS.
        ‘The building of my office is very dangerous. They always steal money.’
     b. BUILDING IX POSS-1 OFFICE DANGER. ONEup STEAL-3up MONEY ALWAYS.
        ‘The building of my office is very dangerous. Someone always steals money.’
        (LSC, Barberà & Cabredo Hofherr 2017: 101)

Epistemic specificity, also known as identifiability, is related to the identification of the discourse referent (Fodor & Sag 1982; Kamp & Bende-Farkas 2006). It is defined as the property of those indefinite NPs that are identifiable by the sender, that is, those entities that are known and/or inherently identifiable. The example in (28) shows an ambiguous sentence. The reading in (28a) corresponds to an epistemically specific discourse referent, which is thus identifiable by the sender. The reading in (28b) corresponds to an epistemically non-specific and unidentifiable discourse referent.

(28)  A student cheated on the syntax exam.
      a. It was the blond lady that always sits in the back row.
      b. I wonder who it was.


Epistemic indefinites have not yet been studied in detail in sign languages. Nevertheless, the field of epistemic modality has recently started to become an attractive research area (Wilcox & Shaffer 2006). Epistemic modality indicates the degree of certainty with which one makes an assertion. It is concerned with the speaker’s attitude towards the actual proposition, judging the truth of the sentence and referring to the probability of the state of affairs or event described by the utterance. Thus, epistemic modality addresses what is known or believed and indicates how much certainty or evidence a speaker has for his or her utterance. Epistemic modality in sign language is coded by a combination of manual signs and non-manual markers (Shaffer et al. 2011). Interestingly, for the purposes of this chapter, the non-manuals resemble some of the marking expressing non-specific indefiniteness, as presented in Section 18.2.1.

As for manual signs, the position of a modal in an utterance corresponds to the modal’s scope and to its role in the discourse (Wilcox & Shaffer 2006). Modals with scope over only the verb appear near the verb, while modals with clausal scope appear near the end of the clause, in the comment of topic-marked constructions. In epistemic modality, the modal typically appears at the end of the utterance. Moreover, Wilcox & Shaffer (2006) observe that in ASL, certain deontic modals, like SHOULD and POSSIBLE, can also be used to express epistemic meaning. The following example illustrates this for SHOULD. The authors note that the non-manual markers that accompany the modal are brow furrow and head nod.

(29)  LIBRARY HAVE DEAF LIFE SHOULD
      ‘The library should have Deaf Life. / I’m sure the library has Deaf Life.’
      (ASL, Wilcox & Shaffer 2006: 226)

In New Zealand Sign Language (NZSL), some frequently observed non-manual markers of modality include lowered corners of the mouth, raised or furrowed brows, eye squint, shoulder shrug, and the movement of the head and/or torso backwards or to one side (McKee & Wallingford 2011: 232). As for DGS, the non-manuals indicating ‘probably’ scope over the entire proposition and include affirmative head nods, a specific mouth pattern, and squinted eyes. Importantly, these non-manuals may express the epistemic meaning even in the absence of the manual adverbial (Herrmann 2013). For Austrian Sign Language (ÖGS), Lackner (2013) discusses one non-manual possibility marker in the form of a sideward head tilt and/or a sideward body lean; the resulting meaning can be paraphrased as ‘maybe’ because it expresses the potentiality/possibility of an unrealized event.

Finally, partitive specificity refers to indefinite NPs that have a restricted set as a possible value. That is, they receive a partitive interpretation when the denotation of the NP is included within a given set (as previously shown by Enç (1991) for Turkish; see example (5)). The partitive and non-partitive pairs in (30) and (31), respectively, are quite similar in interpretation. The main difference is that in the case of overt partitives (30), the quantification necessarily ranges over some specific, non-empty, contextually fixed set.

(30) a. three of the books
     b. one of the books
     c. some of the books


(31) a. three books
     b. one book
     c. some books

As seen in Section 18.2.3, in LSC, signs may be localized at both a low and a high R-locus. When indefinite signs are localized at a low R-locus, a specific reading arises, which may have a partitive interpretation, where the discourse referents belong to a restricted set. The interpretation of the discourse referents conveyed in (32) is restricted by a particular domain of reference. In contrast, when indefinite signs are localized at high R-loci and thus establish the NP in a high area, a non-specific and non-partitive interpretation arises (33).

(32) a. HOUSE SOMElo
        ‘some of the houses’
     b. HOUSE ONElo
        ‘one of the houses’
     c. HOUSE ANYlo
        ‘any of the houses’
        (LSC, adapted from Barberà 2015: 181)

(33) a. HOUSE SOMEup
        ‘some houses’
     b. HOUSE ONEup
        ‘one house’
     c. HOUSE ANYup
        ‘any house’
        (LSC, adapted from Barberà 2015: 181)

ASL overtly marks domain restriction with respect to height in signing space (Davidson & Gagne 2019). When there is a restricted domain, the pronoun is signed low, and the interpretation of the pronoun refers to the entity or entities included in the domain. A high pronoun, in contrast, refers to the maximum set. As shown in the following example, the pronoun may be directed to a low R-locus (34a) and trigger a partitive interpretation: it refers to the members of the family. A pronoun directed to a mid R-locus (34b) refers to the members of the nudist colony. The interpretation is still restricted, but to a wider domain. Last, a pronoun may also be directed to a high R-locus (34c) and refer to the maximum set of entities.

(34)  Context: a family accidentally visits a nudist colony. She comments:
      a. POSS-1 FAMILY IX-ARClo WEAR CLOTHES
         ‘My family, they wear clothes.’
      b. IX-ARCmid NOT WEAR CLOTHES
         ‘They all (here) don’t wear clothes.’
      c. IX-ARCup WEAR CLOTHES
         ‘They all (people in general) wear clothes.’
         (ASL, Davidson & Gagne 2019)

Interestingly, LSC allows the co-occurrence of specificity and domain restriction marking (Barberà 2015). Partitive constructions in LSC may be combined with determiners conveying specific as well as non-specific discourse referents. In such constructions, the partitive phrase first establishes the domain of quantification. The quantifier sign then conveys the (non-)specific reading. In (35), the domain of quantification is first established at a low R-locus (Figure 18.5a), and the specific determiner that ranges over it is uttered afterwards (Figure 18.5b).

(35)  BOOK IX-3pl.lo, IX1 NEED ONElo.
      ‘I need onespec of those books.’
      (LSC, Barberà 2015: 184–185)

a. IX-3pl.lo (‘those’)

b. ONElo (‘onespec’)

Figure 18.5  Partitive construction with a specific determiner in LSC (Barberà 2015: 185, Figure 51; © De Gruyter Mouton, reprinted with permission)

The combination of a non-specific determiner with a partitive construction is grammatical in LSC. In (36), the domain is also first established at a low R-locus (Figure 18.6a), and afterwards the non-specific determiner is articulated at a high R-locus (Figure 18.6b).

(36)  BOOK IX-3pl.lo, IX1 NEED ONEup.
      ‘I need onenon.spec of those books.’
      (LSC, Barberà 2015: 184–185)

a. IX-3pl.lo (‘those’)

b. ONEup (‘onenon.spec’)

Figure 18.6  Partitive construction with non-​specific determiner in LSC (Barberà 2015: 185, Figure 52; © De Gruyter Mouton, reprinted with permission)


18.4  Discussion and concluding remarks

The present chapter has focused on a broad range of phenomena involving referential uses of manual and non-manual markers that convey different types of definiteness and specificity. The markers and their simultaneous interaction reveal something deeper about the options that natural sign languages provide for organizing the referential system, which comes with various theoretical implications. The fact that sign languages can overtly distinguish different types of definiteness and specificity shows that they are equipped with particular resources to express the different theoretical notions related to reference. The different referential statuses of discourse referents are instantiated with varied markings, mostly based on lexical determiners, the order of signs in the NP, particular non-manual marking, and the modulation of signs in the signing space. The different markings may be specialized in conveying particular meanings. On the one hand, for instance, the multiple articulators involved in the realization of non-manual markers have been shown to be determinant in the expression of indefiniteness and in conveying epistemic knowledge of the discourse referent referred to. On the other hand, the complex use of signing space has turned out to be crucial for the distinctions of (non-)specificity, and more concretely for the different scope behaviors and for partitive contexts.

However, a word of caution is in order. As shown in the introductory section, definiteness and specificity are interrelated notions. The results reported in this chapter are not based on minimal pairs, and therefore what is reported as a distinction in specificity or definiteness, or one of their subtypes, could potentially be analyzed differently along one of the interrelated dimensions. While this is already the case in spoken language data, for sign languages this aspect is of special concern because the markers discussed may be simultaneously expressed. Determiners, the order of signs, non-manuals, and the use of signing space are all separate facets of any given NP in a sign language utterance. For the specific case of sign languages, the field would gain a clearer picture of how the reference system of a given sign language works through more direct comparisons across languages via minimal pairs or, alternatively, via quantitative overviews of corpora across phenomena, in order to get a better sense of which semantic and pragmatic distinctions correspond to which markers. An example of this is the case of the ASL sign SELF/G, shown in (6), which is meant to mark definiteness. However, definiteness may already be contributed by the possessive pronoun, the non-manual marking, or the use of signing space. A similar case arises with the wide scope interpretation shown in (22b), where we may wonder whether the reading is triggered by the distributive quantifier EVERY only, by the distributive locations of the numeral TEN-DISTR, or by the combination of the two. Also, the role of the non-manuals should be considered in order to disentangle the specialized meaning of each marker.

The analysis of sign languages contributes to the theoretical study of definiteness and specificity by providing a perspective on the phenomena that incorporates the characteristics afforded by the visual-gestural modality. Much remains to be said about a precise analysis of index signs and non-manual markers, for instance. Moreover, it is still under-investigated how definiteness and specificity are encoded in shared sign languages, which have a preference for an absolute frame of reference that uses conventional absolute relations (de Vos & Pfau 2015), in contrast with urban sign languages, which have a preference for a relative frame of reference. Yet, it seems certain that a broader cross-linguistic and cross-modality perspective on reference contributes substantially to our theoretical understanding in this domain.


Notes
1  This chapter uses the term noun phrase (NP). See Abner, Chapter 10, for an analysis of the Determiner Phrase in sign languages.
2  The sign SELF/G is articulated with a fist with thumb extended on the dominant hand in contact with the non-dominant hand, which has the index finger extended. We follow Fischer & Johnson 2012[1982] for the name of the gloss.
3  This chapter follows the usual glossing conventions in the sign language literature, representing manual signs by the capitalized word corresponding to the translation of the sign. The abbreviations used in the glosses are the following (# is a placeholder for the loci in signing space corresponding to 1st, 2nd, and 3rd person referents): IX# (index pointing sign); #-VERB-# (verb agreeing with subject and object); sub-indices mark localization in signing space: ‘lo’ (low), ‘up’ (up), ‘ip’ (ipsilateral), ‘cl’ (contralateral), ‘c’ (center); lower indexed letters (a, b, …) mark co-reference relations. Reduplication of signs is indicated by ‘+++’.
4  A discourse referent may be localized at a certain spatial location in signing space and may be referred back to later in the discourse. Such a spatial location associated with an entity is called ‘referential locus’ or ‘R-locus’ (Lillo-Martin & Klima 1991).
5  Very recent research shows that high R-loci also correspond to non-specificity, impersonal, and arbitrary readings in Turkish Sign Language (Kelepir et al. 2018), French Sign Language (Garcia et al. 2018), and Hong Kong Sign Language (Sze & Tang 2018).
6  The distinction between low vs. high loci analyzed as encoding the referential status of the discourse referents, and more concretely as marking (non-)specificity, goes beyond the iconic function expressing hierarchical and iconic relations (see Morales-López et al. (2005) and Barberà (2015) for LSC; Liddell (1990), Schlenker & Lamberton (2012), and Schlenker et al. (2013) for ASL; and Zeshan (2000) for Indo-Pakistani Sign Language).
7  This semi-minimal pair is extracted from semi-spontaneous data rather than elicited data. This is the reason why the minimal pair is not exact.
8  This does not imply that bare nouns in sign language only have a weak definite reading, but rather that this is one possible reading which contrasts with a full NP (IX+noun) expressing a strong definite reading. In fact, bare nouns may also have an indefinite reading in LIS (Mantovan 2017) and a definite reading in LSC (Barberà & Quer 2018).

References Abbott, Barbara. 1999. Support for a unique theory of definite descriptions. In Tanja Matthews & Devon Strolovitch (eds.), Proceedings from Semantics and Linguistic Theory IX, 115. Ithaca, NY: Cornell University. Bahan, Benjamin. 1996. Nonmanual realization of agreement in American Sign Language. Boston: Boston University PhD dissertation. Bahan, Benjamin, Judy Kegl, Dawn MacLaughlin, & Carol Neidle. 1995. Convergent evidence for the structure of determiner phrases in American Sign Language. In Leslie Gabriele, Debra Hardison, & Robert Westmoreland (eds.), FLSM VI. Proceedings of the Sixth Annual Meeting of the Formal Linguistics Society of Mid-​America, Vol. 2, 1–​12. Bloomington: Indiana University Linguistics Club. Barberà, Gemma. 2015. The meaning of space in sign language. Reference, specificity and structure in Catalan Sign Language discourse. Berlin and Nijmegen: De Gruyter Mouton & Ishara Press. Barberà, Gemma. 2016. Indefiniteness and specificity marking in Catalan Sign Language (LSC). Sign Language & Linguistics 19(1). 1–​36. Barberà, Gemma & Patricia Cabredo Hofherr. 2017. Two indefinite pronouns in Catalan Sign Language (LSC). Proceedings of Sinn und Bedeutung 21 (2016), University of Edinburgh. Barberà, Gemma & Brendan Costello. 2017. ¿Cómo se expresa la referencia impersonal? Análisis contrastivo entre lengua de signos catalana (LSC) y lengua de signos española (LSE). Actas del Congreso CNLSE 2015, 49–​63. Madrid: CNLSE. Barberà, Gemma & Josep Quer. 2013. Impersonal reference in Catalan Sign Language (LSC). In Laurence Meurant, Aurélie Sinte, Mieke van Herreweghe, & Myriam Vermeerbergen (eds.), Sign language research, uses and practices: Crossing views on theoretical and applied sign language linguistics, 237–​258. Berlin and Nijmegen: De Gruyter Mouton & Ishara Press.


Gemma Barberà Barberà, Gemma & Josep Quer. 2015. Genericity in Catalan Sign Language (LSC). Paper presented at Workshop on Sign Languages and R-​impersonal Pronouns. Université Paris 8, February 6. Barberà, Gemma & Josep Quer. 2018. Nominal referential values of semantic classifiers and role shift in signed narratives. In Annika Hübl & Markus Steinbach (eds), Linguistic foundations of narration in spoken and sign languages, 251–​274. Amsterdam: John Benjamins. Bertone, Carmela. 2009. The syntax of noun modification in Italian Sign language (LIS). University of Venice Working Papers in Linguistics 19. 7–​28. Bhat, Darbhe N.S. 2005. Pronouns. Oxford: Oxford University Press. Carlson, Greg & Rachel Sussman. 2005. Seemingly indefinite definites. In Stephan Kepsar & Marga Reis (eds.), Linguistic evidence, 71–​86. Berlin: De Gruyter Mouton. Chung, Sandra & William A. Ladusaw. 2004. Restriction and saturation. Cambridge, MA: MIT  Press. Conlin, Frances, Paul Hagstrom, & Carol Neidle. 2003. A particle of indefiniteness in American Sign Language. Linguistic Discovery 2(1). 1–​21. Cormier, Kearsy. 2012. Pronouns. In Roland Pfau, Markus Steinbach, & Bencie Woll (eds.), Sign language: An international handbook, 227–​244. Berlin: De Gruyter Mouton. Dachkovsky, Svetlana & Wendy Sandler. 2009. Visual intonation in the prosody of a sign language. Language and Speech 52(2/​3). 287–​314. Davidson, Kathryn & Deanna Gagne. 2019. “More is up” for domain restriction in ASL. Manuscript, Harvard University and Gallaudet University. de Vos, Connie & Roland Pfau. 2015. Sign language typology:  The contribution of rural sign languages. Annual Review of Linguistics 1. 265–​288. De Vriendt, Sera & Max Rasquinet. 1990. The expression of genericity in sign language. In Siegmund Prillwitz & Tomas Vollhaber (eds.), Current trends in European sign language research:  Proceedings of the 3rd European Congress on Sign Language Research, 249–​255. Hamburg: Signum Verlag. Enç, Mürvet. 1991. The semantics of specificity. Linguistic Inquiry 22(1). 1–​25. Engberg-​Pedersen, Elisabeth. 1993. Space in Danish Sign Language. The semantics and morphosyntax of the use of space in a visual language. Hamburg: Signum-​Verlag. Farkas, Donka. 2002. Specificity distinctions. Journal of Semantics 19. 1–​31. Fischer, Susan & Robert Johnson. 2012[1982]. Nominal markers in ASL. Sign Language & Linguistics 15(2). 243–​250. Fodor, Janet Dean & Ivan A. Sag. 1982. Referential and quantificational indefinites. Linguistics and Philosophy 5. 355–​398. Garcia, Brigitte, Marie-​Anne Sallandre, & Marie-​Thérèse L’Huillier. 2018. Impersonal human reference in French Sign Language. Sign Language & Linguistics 21(2). 307–​333. Geraci, Carlo. 2012. Commentary on “Impersonal reference in Catalan Sign Language”. Workshop on Impersonal Human Pronouns. CNRS/​Paris 8. November 8. Haspelmath, Martin. 1997. Indefinite pronouns. Oxford: Oxford University Press. Heim, Irene. 1982. The semantics of definite and indefinite noun phrases. Amherst, MA: University of Massachusetts PhD dissertation. Herrmann, Annika. 2013. Modal and focus particles in sign languages. A  cross-​linguistic study. Berlin: De Gruyter Mouton. Ionin, Tania. 2006. This is definitely specific: Specificity and definiteness in article systems. Natural Language Semantics 14. 175–​234. Irani, Ava. 2018. On (in)definite expressions in American Sign Language. In Ana Aguilar-​Guevera, Julia Pozas Loyo, & Violeta Vázquez Rojas Maldonado (eds.), Definiteness across languages. Berlin: Language Science Press. 
Kadmon, Nirit. 1990. Uniqueness. Linguistics and Philosophy 13. 273–​324. Kamp, Hans. 1981. A theory of truth and semantic representation. In Jeroen A.G. Groenendijk, Theo M.V. Janssen, & Martin B.J. Stokhof (eds), Formal methods in the study of language, 227–​ 322. Amsterdam: Mathematical Centre. Kamp, Hans & Ágnes Bende-​Farkas. 2006. Epistemic specificity from a communication-​theoretical perspective. Manuscript, Universität Stuttgart. Kelepir, Meltem, Asli Özkul, & Elvan Tamyürek Özparlak. 2018. Agent-​backgrounding in Turkish Sign Language (TİD). Sign Language & Linguistics 21(2). 257–​283.


Specificity and definiteness Kimmelman, Vadim. 2014. Information structure in Russian Sign Language and Sign Language of the Netherlands. Amsterdam: University of Amsterdam PhD dissertation. Kimmelman, Vadim. 2017. Quantifiers in Russian Sign Language. In Denis Paperno & Edward L. Keenan (eds.), Handbook of quantifiers in natural languages, Vol. 2, 803–​855. Dordrecht: Springer. Lackner, Andrea. 2013. Linguistic functions of head and body movements in Austrian Sign Language (ÖGS). A corpus-​based analysis. Graz: University of Graz PhD dissertation. Liddell, Scott. 1990. Four functions of a locus: Reexamining the structure of space in ASL. In Ceil Lucas (ed.), Sign language research:  Theoretical issues, 176–​198. Washington, DC:  Gallaudet University Press. Lillo-​Martin, Diane & Edward S. Klima. 1991. Pointing out differences: ASL pronouns in syntactic theory. In Susan Fischer & Patricia Siple (eds.), Theoretical issues in sign language research, Vol. 1: Linguistics, 191–​210. Chicago: University of Chicago Press. Lyons, Christopher. 1999. Definiteness. Cambridge: Cambridge University Press. Machado de Sá, Thaís Maira, Guilherme Lourenço de Souza, Maria Luiza da Cunha Lima, & Elidea Lúcia Almeida Bernardino. 2012. Definiteness in Brazilian Sign Language: A study on weak and strong definites. Revista Virtual de Estudos da Linguagem 10(19). 21–​38. MacLaughlin, Dawn. 1997. The structure of determiner phrases: Evidence from American Sign Language. Boston, MA: Boston University PhD dissertation. Mantovan, Lara. 2017. Nominal modification in Italian Sign Language. Berlin: De Gruyter Mouton. Mathur, Gaurav. 1996. A presuppositionality marker in ASL. Manuscript, MIT. Matthewson, Lisa. 1998. Determiner systems and quantificational strategies: Evidence from Salish. The Hague: Holland Academic Graphics. McKee, Rachel & Sophia Wallingford. 2011. ‘So, well, whatever’: Discourse functions of ‘palm-​up’ in New Zealand Sign Language. Sign Language & Linguistics 14 (2). 213–​247. Morales-​López, Esperanza, Rosa Maria Boldú-​Menasanch, Jesús Amador Alonso-​Rodríguez, Victoria Gras-​ Ferrer, & María Ángeles Rodríguez-​González. 2005. The verbal system of Catalan Sign Language (LSC). Sign Language Studies 5(4). 441–​496. Partee, Barbara. 1970. Opacity, co-​reference, and pronouns. Synthèse 21(3–​4). 359–​385. Pfau, Roland & Markus Steinbach. 2006. Modality-​independent and modality-​specific aspects of grammaticalization in sign languages (Linguistics in Potsdam 24). Potsdam: Universitäts-​Verlag. Available at: http://​opus.kobv.de/​ubp/​volltexte/​2006/​1088/​. Rinfret, Julie. 2009. L’association spatiale du nom en langue des signes québécoise:  formes, fonctions et sens. Montréal: Université du Québec à Montréal PhD dissertation. Roberts, Craige. 2003. Uniqueness in definite noun phrases. Linguistics and Philosophy 26. 287–​350. Schlenker, Philippe & Jonathan Lamberton. 2012. Formal indices and iconicity in ASL. In Maria Aloni, Floris Roelofsen, Galit W. Sassoon, Katrin Schulz, Vadim Kimmelman, & Matthijs Westera (eds.), Proceedings of the 18th Amsterdam Colloquium 2011, LNCS 7218, 1–​11. Berlin and Heidelberg: Springer Verlag. Schlenker, Philippe, Jonathan Lamberton, & Mirko Santoro. 2013. Iconic variables. Linguistics and Philosophy 36(2). 91–​149. Schwarz, Florian. 2009. Two types of definites in natural language. Amherst, MA: University of Massachusetts PhD dissertation. Shaffer, Barbara, Maria Josep Jarque, & Sherman Wilcox. 2011. 
The expression of modality:  Conversational data from two signed languages. In Márcia Teixeira Nogueira & Maria Fabíola Vasconcelos Lopes (eds.), Modo e modalidade. Gramática, discurso e interaçao, 11–​39. Fortaleza: Edições UFC. Sze, Felix & Gladys Tang. 2018. R-​impersonals in Hong Kong Sign Language. Sign Language & Linguistics 21(2). 284–​306. Tang, Gladys & Felix Sze. 2002. Nominal expressions in Hong Kong Sign Language: Does modality make a difference? In Richard P. Meier, Kearsy Cormier, & David Quinto-​Pozos (eds.), Modality and structure in signed and spoken languages, 296–​320. Cambridge: Cambridge University Press. von Heusinger, Klaus. 2002. Specificity and definiteness in sentence and discourse structure. Journal of Semantics 19. 245–​274. von Heusinger, Klaus. 2011. Specificity. In Klaus von Heusinger, Claudia Maienborn, & Paul Portner (eds.), Semantics: An international handbook of natural language meaning, 1024–​1057. Berlin: De Gruyter Mouton.


Gemma Barberà Wilbur, Ronnie. 1996. Focus and specificity in ASL structures containing self. Paper presented at the Linguistic Society of America (LSA), Winter Meeting, San Diego, CA. Wilbur, Ronnie. 2008. Complex predicates involving events, time and aspect:  Is this why sign languages look so similar? In Josep Quer (ed.), Signs of the time. Selected papers from TISLR 8, 217–​250. Hamburg: Signum Verlag. Wilcox, Sherman & Barbara Shaffer. 2006. Modality in American Sign Language. In William Frawley (ed.), The expression of modality, 207–​237. Berlin: De Gruyter Mouton. Winston, Elisabeth. 1995. Spatial mapping in comparative discourse frames. In Karen Emmorey & Judy Reilly (eds.), Language, gesture, and space, 87–​114. Mahwah, NJ: Lawrence Erlbaum. Zeshan, Ulrike. 2000. Sign language in Indo-​ Pakistan:  A description of a signed language. Amsterdam: John Benjamins. Zeshan, Ulrike. 2004. Interrogative constructions in signed languages: Cross-​linguistic perspectives. Language 80(1). 7–​39. Zimmer, June & Cynthia Patschke. 1990. A class of determiners in ASL. In Ceil Lucas (ed.), Sign language research: Theoretical issues, 201–​210. Washington, DC: Gallaudet University Press.


19
QUANTIFICATION
Theoretical perspectives

Vadim Kimmelman & Josep Quer

19.1  Introduction

The study of quantificational expressions is one of the central domains in the field of natural language semantics. Probably every language has means of expressing quantification, but quantifiers in natural languages are not straightforwardly parallel to logical quantifiers. Over the past 30 years, a lot of research has been done to describe and explain in detail the properties of quantifiers in diverse languages (see Keenan (2006); Matthewson (2008); Keenan & Paperno (2012); and Paperno & Keenan (2017) for a cross-linguistic perspective). However, quantification in sign languages is severely understudied. This chapter offers an overview of existing research on this topic and discusses some ideas for future research. Before proceeding to the discussion of sign languages, the necessary theoretical background is introduced in this section.

The most studied type of quantification concerns the so-called D-quantifiers, that is, adnominal determiners with quantificational functions, such as English all, each, most, some, etc. A D-quantifier together with a noun forms a quantifier noun phrase, such as each boy in Each boy entered. The meaning of the quantifier noun phrase (a generalized quantifier) is thus a function which maps properties (e.g., entered) to truth values, and the meaning of a quantifier is accordingly a function from properties to generalized quantifiers. Keenan (2006) argues that the meaning of quantifiers is better characterized in set-theoretical terms, for instance “ALL (A) (B) is true iff A ⊆ B”. In this representation, it also becomes clear that the structure of a quantificational expression has three main parts: the operator – the quantifier itself (ALL) –, the restrictor (A), and the nuclear scope (B).

There are different semantic types of quantifiers. Some quantifiers (some, many, two) are existential: they can be used in existential contexts, such as There are two cats in the garden. An important subtype of existential quantifiers is cardinal quantifiers, which express cardinality, including numerals. Other quantifiers (all, each) are universal, and they cannot be used in such contexts.1 Often a distinction is made between collective (all) and distributive (each) universal quantifiers. Finally, there are also proportional quantifiers, such as most and half. In addition, complex quantifiers can be built through the combination of multiple quantifiers: Some but not more than half of the students came.
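As a purely illustrative rendering of this set-theoretic view (the LaTeX notation below is a simplification introduced here, not a formalism taken from the works cited), the three major quantifier types can be written as relations between the restrictor set A and the nuclear scope set B:

\[
\begin{aligned}
\text{ALL}(A)(B) = 1 &\iff A \subseteq B \\
\text{SOME}(A)(B) = 1 &\iff A \cap B \neq \emptyset \\
\text{MOST}(A)(B) = 1 &\iff |A \cap B| > |A \setminus B|
\end{aligned}
\]

On this view, Each boy entered is true iff the set of boys (the restrictor) is a subset of the set of individuals who entered (the nuclear scope).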


In addition to D-quantifiers, all natural languages seem to have A-quantifiers, that is, quantifiers which are not bound to a particular argument, but which are instead combined with predicates and quantify over events. For instance, John always walks to work means that the set of events in which John goes to work is a subset of the events in which John walks. Such quantifiers can also be existential (often, never), universal (always), or proportional (mostly, usually), although it is somewhat difficult to differentiate between existential and proportional A-quantifiers. A-quantifiers can be adverbial, but they can also be verbal affixes. However, a verbal affix can also quantify over arguments (Evans 1995), so the boundary between D- and A-quantifiers is not always straightforward; therefore, we discuss lexical quantifiers (both D- and A-quantifiers) separately from morphological modifiers with quantificational meaning.

One of the most important properties of quantifiers in natural languages is that when two or more quantifiers are combined in one sentence, scope ambiguities can arise. For instance, Some teacher graded every paper can either mean that there is one specific teacher who has graded all papers (some scopes over every), or that for every paper, there is a teacher who graded it (every scopes over some); see the schematic logical forms at the end of this section. A scope ambiguity can also arise between a D- and an A-quantifier, as in Two boys sang three times.

Keenan & Paperno (2012) discussed the results of a typological survey of quantifiers in 18 natural languages and came up with a list of generalizations.2 For instance, they found that all these languages have both D- and A-existential quantifiers, and all also have both D- and A-universal quantifiers. All languages in their sample also distinguish distributive and collective universal quantifiers. Proportional D- and A-quantifiers have been found in all languages as well. Finally, all languages showed some scope ambiguities.

There is no principled reason to expect that sign languages would diverge from the generalizations formulated above. Thus, we also expect to find both D- and A-quantifiers of different semantic types, as well as scope ambiguities. However, we also expect to find some effects of the visual modality, such as iconicity or the use of space, which have been shown to fundamentally affect both the syntactic and the semantic domains of sign languages (Sandler & Lillo-Martin (2006); Meier (2012); see also Schlenker (2018) and Schlenker, Chapter 23, for a discussion of modality effects on semantics specifically). Based on previous research on quantification in sign languages, we further discuss the following issues: lexical quantifiers (Section 19.2), quantificational morphology (Section 19.3), and structural aspects of quantification (Section 19.4). Throughout the chapter we point out possible modality effects, but we also specifically address them in the last subsection.
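The scope ambiguity of Some teacher graded every paper mentioned above can be made explicit with the following first-order sketch (an illustrative simplification in our own notation, not a formalization drawn from the sources cited):

\[
\begin{aligned}
\exists > \forall: &\quad \exists x\,[\text{teacher}(x) \wedge \forall y\,[\text{paper}(y) \rightarrow \text{graded}(x,y)]]\\
\forall > \exists: &\quad \forall y\,[\text{paper}(y) \rightarrow \exists x\,[\text{teacher}(x) \wedge \text{graded}(x,y)]]
\end{aligned}
\]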

19.2  Lexical quantifiers

To clarify the terminology, in this section we discuss quantifiers which are lexical, that is, they are separate signs with quantificational meanings. In Section 19.3 we discuss morphological modification which can apply to verbs (but not only verbs) to express quantificational meanings.

19.2.1  D-quantification

It comes as no surprise that sign languages have lexical D-quantifiers. At least one type of quantifier is documented for most sign languages, namely numerals, which are usually reported in dictionaries. There is considerable research on numerals in both urban and rural sign languages (see Zeshan et al. (2013) and references therein). However, such research is primarily focused on the phonology and morphology of numerals, and on the questions of numeral bases and strategies of constructing larger numerals, while questions such as possible scope ambiguities are not discussed.

Although it is not a reliable research tool, a quick search of the website www.spreadthesign.com (a multilingual sign language dictionary containing signs from 41 sign languages, as of June 2020) shows that D-quantifiers of all three major types (universal, existential, and proportional) are well represented. Twenty-eight languages have an equivalent for the meaning all, 29 languages have a sign for each, 29 languages have an equivalent for some, and 31 have a sign for majority.3

Some papers specifically discuss quantifiers in sign languages. For instance, Petronio (1995) discussed primarily verbal quantification in American Sign Language (ASL) (see Section 19.3), but her examples also include existential quantifiers, such as TWO and MANY. Partee (1995) also discusses a universal quantifier ALL in ASL. A more recent and comprehensive discussion of quantifiers in ASL can be found in Abner & Wilbur (2017), who mention existential quantifiers SOME, SOMEONE, FEW, MANY, and NO, universal quantifiers #ALL (a lexicalized fingerspelling sequence) and EACH, and proportional quantifiers MOST and HALF, among others. For Catalan Sign Language (LSC), Quer (2012) found universal (ALL), existential (SOME), and proportional (MAJORITY) D-quantifiers.4

Kimmelman (2017) described quantifiers in Russian Sign Language (RSL) based on the questionnaire from Keenan & Paperno (2012). He found that all types of lexical D-quantifiers are well represented in RSL. For instance, RSL has existential quantifiers, such as numerals (ONE (1), TWO, etc.) and others: SOME, MANY, NOBODY, NOTHING. It also has both collective (ALL) and distributive (EVERY) universal quantifiers (2). Finally, it has proportional quantifiers, such as HALF (3).

(1)  INDEX1 BUY ORANGE ONE
     ‘I bought one orange.’
     (RSL, Kimmelman 2017: 811)

(2) a. ALL BOY LATE
       ‘All boys were late.’
       (RSL, Kimmelman 2017: 826)
    b. EVERY QUESTION
       ‘Every question.’
       (RSL, Kimmelman 2017: 818)

(3)  GIRL HALF SICK
     ‘Half of the girls were sick.’
     (RSL, Kimmelman 2017: 826)

Interestingly, as in many spoken languages, existential quantifiers can be used in existential constructions, while universal quantifiers cannot: consider example (4), where an existential quantifier MANY but not a universal quantifier ALL can be used.

         re
(4)  ROOMa INDEXa EXISTa MANY/(*ALL) BOY
     ‘There are many boys in the room.’
     (RSL, Kimmelman 2017: 821)

In RSL, as in many spoken languages, some complex quantifiers can be built. For instance, numerals can be modified with APPROXIMATELY and EXACTLY (5). In addition, juxtaposition of numerals is also used to express approximate quantity (6).

(5)  NEW YEAR CHAMPAGNE PEOPLE DRINK APPROXIMATELY 60 PERCENT
     ‘Approximately 60% of people drink champagne at New Year.’
     (RSL, Kimmelman 2017: 814)

(6)  WALL PICTURE HANG TWO THREE MAXIMUM FOUR
     ‘There are two or three pictures on the wall, at most four.’
     (RSL, Kimmelman 2017: 815)

Keenan & Paperno (2012) also observed that, in spoken languages, A-quantifiers are sometimes derived from D-quantifiers, but the opposite pattern is very rare. Kimmelman (2017) confirmed this for RSL: some A-quantifiers, such as ONE-TIME ‘once’ seem to be morphologically derived from numeral D-quantifiers, but there are no examples of D-quantifiers derived from A-quantifiers.

19.2.2  A-quantification

In opposition to D-quantification, Bach et al. (1995) group other ways of encoding quantification under the label A-quantification, which includes adverbs, auxiliaries, affixes, and argument-structure adjusters. They introduce quantification “in a more constructional way” (Partee 1995: 544). Research on these strategies of expressing quantification is limited for sign languages. In this section, adverbial quantifiers are addressed, including those that express generic quantification. For quantification over events yielding different aspectual meanings, see Malaia & Milković, Chapter 9.

The idea behind A-quantification stems from the analysis of indefinites by Kamp (1981) and Heim (1982), according to which indefinite expressions are non-quantificational, and they just introduce a variable with descriptive content that must be unselectively bound by a quantifier. This quantifier can be the existential closure introduced at discourse level, but also an overt quantifier. An important piece of evidence to support this view is provided by sentences like (7a) and (7c): the sentences with indefinite subjects can be faithfully paraphrased as in (7b) and (7d), with D-quantifier subjects. The quantificational force has been shown to come from adverbials like always and often, which are called quantificational adverbs or Q-adverbs, since they are able to unselectively bind open variables in their scope (it has to be assumed that they take sentential scope, which is unproblematic for this kind of quantifiers). Next to Q-adverbs with universal and proportional quantificational force, there are Q-adverbs with existential force like sometimes, twice, or never (7e,f).

(7) a. A dog always makes good company.
    b. All dogs make good company.
    c. A cat often stays home.
    d. Most cats stay home.
    e. A cat never likes bathing.
    f. No cat likes bathing.
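The paraphrase relation between (7a) and (7b) can be sketched as follows; this is an illustrative rendering of the unselective-binding idea in our own notation, where the Q-adverb binds the variable introduced by the indefinite:

\[
\text{(7a)}\quad \text{ALWAYS}_{x}\,[\text{dog}(x)]\,[\text{makes-good-company}(x)]
\;\approx\;
\forall x\,[\text{dog}(x) \rightarrow \text{makes-good-company}(x)]
\quad \text{(= 7b)}
\]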

Unsurprisingly, sign languages also feature this type of quantificational adverbs, as in the RSL example in (8) or the ASL example in (9) (see also Abner & Wilbur (2017: 37–40, 44–45)).

          re
(8)  IX1 WORK WALK ALWAYS1 ON-FOOT
     ‘I always go to work on foot.’
     (RSL, Kimmelman 2017: 813)

(9)  JOHN ALWAYS LOSE PAPER
     ‘John always loses his papers.’
     (ASL, Braze 2004: 38)

Although the interaction with indefinite expressions has not been extensively addressed, examples are found in the literature. For instance, in LSC the clausal negator NOTHING25 has been shown to be able to bind the empty variable slots in the right context, as in (10).

              hs
(10) a. BRING NOTHING2
        ‘Nobody brought anything.’
                         hs        hs
     b. YESTERDAY NIGHT COME NOTHING2
        ‘No one came last night.’
        (LSC, Quer 2012: 85)

A case for which unselective quantificational binding has been described is that of generic and characterizing/habitual statements (Krifka et al. 1995). Quer (2012) shows that in LSC, generic statements can be overtly marked with the sign ÉS (‘it is’) in clause-final position, as illustrated in (11). It is taken to lexicalize the generic operator GEN. While specific or definite DPs are accompanied by a pointing sign, arguments interpreted generically are not, as is the case for the subject in (11). As an indefinite description, the variable it introduces is unselectively bound by the generic operator.

        re
(11)  LION PREDATE+++ ÉS
      ‘The lion is a predator.’
      (LSC, Quer 2012: 86)

The same phenomenon can be observed with the covert generic or habitual operator, as illustrated in (12) and (13) for LSC. Example (12) is a prototypical case of a donkey sentence, where the subject and object descriptions in the conditional antecedent are bare and not localized. Example (13) is an example of a characterizing or habitual predication, again with an indefinite subject. In both cases, the bare nouns are unselectively bound by the generic or habitual operator.

           re
(12)  [IF PEASANT HORSE THERE-BE] ALWAYS TAKE-CARE
      ‘If a farmer has a horse, he always takes care of it.’
      (LSC, Quer 2012: 85)

           re
(13)  [FRIEND PERSON COME] IX1 3INVITE1
      ‘When a friend came, I would treat him/her.’ / ‘When a friend comes, I treat him/her.’
      (LSC, Quer 2012: 86)


Barberà & Quer (2015) note that in a generic statement like (14), the subject bare noun is in principle ambiguous between a bound reading (14a) and a specific one (14b): while in the former case, the subject variable is unselectively bound in the restrictor of GEN, in the latter, the indefinite variable is existentially closed in the nuclear scope of the generic operator (see Section 19.4.1 for details).

(14)  PARROT SPEAK ÉS
      a. ‘A parrot speaks.’
      b. ‘A certain parrot typically speaks.’
      (LSC, Barberà & Quer 2015)
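The bound reading (14a) can be sketched in tripartite form as follows (an illustrative simplification in our own notation; see Section 19.4.1 for the tripartite structure assumed):

\[
\text{(14a)}\quad \text{GEN}_{x}\,[\text{parrot}(x)]\,[\text{speak}(x)]
\]

On the specific reading (14b), by contrast, the parrot variable is not bound by GEN in the restrictor but receives existential closure, so that the characterizing statement ends up being about one particular parrot.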

19.3  Quantificational morphology

Klima & Bellugi (1979) demonstrated that ASL verbs can undergo a large number of modifications, some of which express aspect, and others appear to quantify over events or the arguments of the verb. So, for instance, they described the following morphemes: [multiple], expressing actions to many and realized as a simple arc-shaped movement, and [exhaustive], encoding distributed action to each individual in a group and realized as an arc-shaped movement with reduplication along the arc. Petronio (1995) discussed these morphological markers from the quantificational perspective. She showed that ASL uses bare noun phrases in combination with morphological modification of the verbs. Compare (15a) to (15b): in both sentences, the noun phrase is bare, but in the first case, it is interpreted as singular, while in the second case, it is interpreted as plural and quantified over. Quer (2012) showed that the same is true for LSC, where the [multiple] morpheme is used as a collective universal quantifier and the [exhaustive] morpheme as a distributive one (16).6

          re
(15) a. DOCTORa MONEY ANN GIVEa
        ‘Ann gave money to the doctor.’
        (ASL, adapted from Petronio 1995: 610)
           re
     b. STUDENTa BOOK ANN GIVEa-exhaustive
        ‘Ann gave a book to each student.’
        (ASL, adapted from Petronio 1995: 611)

(16) a. PERSONpl STUDENT INDEX^THREE INDEX1 1ASK3-multiple
        ‘I asked the three students.’
        (LSC, adapted from Quer 2012: 89)
     b. PERSONpl STUDENT INDEX^THREE INDEX1 1ASK3-exhaustive
        ‘I asked each of the three students.’
        (LSC, adapted from Quer 2012: 89)

Interestingly, the same morphological markers can attach to other elements in the sentence, not just to verbs, as Quer (2012) showed for LSC. Consider (17), where the same marker attaches to the verb ASK, to the numeral ONE, and to the possessive marker POSS.

                                    re
(17)  STUDENT ONE-exhaustive TEACHER POSS-exhaustive ASK-exhaustive
      ‘Each student asked his/her teacher.’
      (LSC, adapted from Quer 2012: 90)


Kimmelman (2015a, 2015b, 2017) demonstrated that this is also true for RSL. In this language, the distributive morpheme can attach to verbs, nouns, numerals, pronouns, and even to the lexical quantifier EVERY, as (18) shows. Thus, it is very difficult to characterize this marking as belonging to either A- or D-quantifiers, given its wide compatibility.

(18)  EVERY-exhaustive MAN BUY BEER-exhaustive
      ‘Every man bought a beer.’
      (RSL, Kimmelman 2015a)

Importantly, both in RSL and LSC, the morpheme can attach to both the distributed share (what is being distributed – the beers in (18)) and the distributive key (what is being distributed over – the set of men in (18)). Distributive and collective morphemes are not unattested in spoken languages; for instance, in Mayali quantification over arguments is often expressed through verbal morphemes (Evans 1995). However, it seems much less common (if attested) that the same morpheme would attach to different constituents in the sentence, and to both the distributive key and the distributed share (but see Champollion (2015) for a discussion of some cases in English, German, and Korean, where distributivity is also expressed in multiple places in a sentence; also see Kuhn (2017) for a formal analysis of dependent indefinites in ASL, and Kuhn (2019) for a more general discussion of the phenomenon).

In addition to [multiple] and [exhaustive], there are other similar markers of plurality. For instance, Kuhn & Aristodemo (2017) show that in French Sign Language (LSF), there are at least two different markers of verbal plurality, which is also known as pluractionality: /rep/, which is a full repetition of the verb, and /alt/, which is an alternating repetition with two hands. Interestingly, these markers have different semantics. /rep/ is used to express distribution of events over time (19a), while /alt/ is used to express distribution of events over participants (arguments), or over both participants and time, but never over time only (19b,c).

(19) a. MY FRIENDS CL-area FORGOT-rep BRING CAMERA
        ‘My friends kept on forgetting to bring a camera.’
        OK: several times; each time, all forgot
        *: a single time; all forgot / several times; each time, a different one forgot
     b. GROUP PEOPLE BOOK GIVE1-alt
        ‘A group of people gave me books.’
     c. * OFTEN ONE PERSON FORGET-alt ONE WORD
        Intended meaning: often one person forgets one word.
        (LSF, Kuhn & Aristodemo 2017: 10–11)

Quer (2019) confirms the existence of the same markers in LSC. Unlike LSF, however, LSC seems to be able to accommodate the interpretation of /alt/ occurring with singular arguments. In the grammatical LSC example corresponding to (19c), the accommodation can take place through variation in the spatial dimension (next to the temporal one), as illustrated in (20).

(20)  JUAN PASSWORD FORGET-alt
      ‘Juan forgot the password (in different places).’
      (LSC, Quer 2019: 13)

It thus seems that /alt/ in LSC is more flexible with respect to the parameters against which it can be interpreted. An additional pluractional marker /rep-arc/ is also reported in LSC. Combined with a motion verb, it yields a reading of plurality of locations, with a secondary temporal reading of plurality of times. This form is the one that has been characterized in the literature as [distributive/exhaustive], but Quer (2019) shows that in LSC, it does not impose an exhaustive reading of the reduplicative morphology, as the felicitous continuation in (21) shows. The question remains whether the exhaustive reading attested sometimes with these forms derives from pure contextual conditions or not.

(21)  SUPERVISOR SCHOOL GO-rep-arc, OTHER/SOME NOT
      ‘The supervisor went to some schools, to (some) others he didn’t.’
      (LSC, Quer 2019: 14)

A large variety of verbal modifiers that express different types of distributive semantics has been found in RSL by Filimonova (2012). In addition to the abovementioned [exhaustive] morpheme, RSL also uses a two-handed alternating repetition [alt] (similar to LSF), but also a combination of two-handedness with either an arc-shaped movement [2-multiple],7 whereby both hands move along a simple arc without reduplication, or a reduplicated arc-shaped movement [2-exhaustive]. Filimonova claimed that two-handed alternating repetition is often used to express uncontrolled processes (22a), while the [2-multiple] and [2-exhaustive] morphemes are used in situations when the subject is plural, and the object is distributed over (22b). It might be the case that [2-multiple] and [2-exhaustive] are actually compositional combinations of the [2 hands] morpheme expressing plurality of the subject, and [multiple] or [exhaustive] expressing object plurality or distribution over the object.

(22) a. SHEEP GROUP ONE WEEK DIE-alt ALL DESTROY
        ‘All our sheep died in a week.’
        (RSL, adapted from Filimonova 2012: 86)
     b. CHILDREN INDEXpl ATTENTIVE LOOK-2-multiple
        [A teacher talking to children:] ‘Children, you have to check [your works] carefully.’
        (RSL, adapted from Filimonova 2012: 89)

Given the observed cross-linguistic variation, it is clear that much more research is needed to pin down the (quantificational) properties of (verbal) modification in sign languages, and a lot of further interesting discoveries can be made in this domain of inquiry.

Finally, the discussion above concerned quantificational morphology of lexical verbs (plain and agreeing; see Quer, Chapter 5 for details); however, whole-entity classifier predicates can also express quantification of the arguments. For instance, in ASL, using a semantic classifier with the 1-handshape (glossed as CL1) means a singular entity moving (23a), while using a classifier with two extended fingers means that two entities are moving, and using a two-handed classifier predicate with four fingers extended on each hand (glossed as CL44) can express the meaning of an undetermined plurality of entities (e.g., humans) moving (23b) (Petronio 1995; also see Filimonova (2012) for RSL).

        top
(23) a. STOREa, MAN CL1:go-toa
        ‘The man went to the store.’
        top
     b. STOREa, MAN CL44:go-toa
        ‘The men went to the store.’
        (ASL, adapted from Petronio 1995: 614)


In addition to using the handshape, the movement and location in whole-​entity classifier predicates can be used for quantification (Petronio 1995, among others). The predicate can be reduplicated in various points of space to express plurality and location of the argument, as in (24) from DGS, where the whole-​entity predicate is repeated in three different locations. (24)

TABLE BOOK CLB(flat object): be-at-repa,b,c
‘Books are lying next to each other on the table.’
(DGS, adapted from Steinbach 2012: 125)

As with lexical verbs, classifier predicates with quantificational morphology can combine with bare noun phrases not specified for number, such that quantification is expressed only in the predicate (24).

19.4  Structural aspects of quantification Together with lexical and morphological encoding of quantificational meanings, syntactic structure plays a crucial role in the way quantificational readings are conveyed. In this section, tripartite structures, scopal interactions, and spatial encoding of certain aspects of quantification are addressed.

19.4.1  Tripartite structures of quantification In the study of natural language quantification, a proposal has been made to try to ascertain how information-​structural notions like topic and focus interact with the expression of quantificational structures (Partee 1992, 1995; Bach et  al. 1995). Their focus-​sensitivity is central to the approach. The main hypothesis is that the topic/​focus articulation determines the projection of quantificational tripartite structures, namely the semantic structuring of the quantification into Operator-​Restrictor-​Nuclear scope (see Section 19.1). Figure  19.1 illustrates this view in a schematic way, by featuring “a number of hypothesized syntactic, semantic, and pragmatic structures that can be argued to be correlated with each other and with the basic tripartite scheme” (Partee 1995: 545–​546).

Figure 19.1  Tripartite structures generalized (Partee 1992: 163)


On the basis of LSC data, Quer (2012) proposes that sign languages might tend to encode the tripartition of quantificational structures overtly. The tendency might be related to the fact that in general, they encode information-​structural notions overtly through syntax and non-​manual markers (see Kimmelman & Pfau, Chapter  26). This can be readily observed in generic or habitual/​characterizing sentences such as (12)–​(13) above, repeated here as (25)–​(26) for convenience. The overt Q-​adverb ALWAYS in (25) and the covert generic or habitual operator in (26) are standardly understood to take scope over the whole sentence; the dependent clause in the left periphery is marked with brow raise, as topics are, and it realizes the restrictor of the operator, which unselectively binds the free variables in it, giving rise to quantificational variability effects. The main clause predication constitutes the nuclear scope of the quantificational structure.              

(25)        re
     [IF PEASANT HORSE THERE-BE] ALWAYS TAKE-CARE
     ‘If a farmer has a horse, he always takes care of it.’
     (LSC, Quer 2012: 85)

(26)        re
     [FRIEND PERSON COME] IX1 3INVITE1
     ‘When a friend came, I would treat him/her.’ / ‘When a friend comes, I treat him/her.’
     (LSC, Quer 2012: 86)
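In this notation, (25) can be given the following schematic logical form (a rendering added here for illustration, not quoted from the source; the predicate names peasant, horse, have, and take-care are informal abbreviations of the corresponding signs):

     ALWAYS_{x,y} [peasant(x) ∧ horse(y) ∧ have(x,y)] [take-care(x,y)]

The Q-adverb ALWAYS is the operator, the brow-raised conditional clause supplies the restrictor, and the main clause predication supplies the nuclear scope; the operator unselectively binds the variables x and y left free in the restrictor.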

This tendency is shown to surface in D-​quantification, whereby the restrictor of the quantifier often appears as a topic in the left periphery, detached from the quantifier and marked by brow raise, as in the LSC case in (27).    

(27)

re

STUDENTS MAJORITY EXAM PASS

‘Most students passed the exam.’      

(LSC, Quer 2012: 88)
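The D-quantified case in (27) realizes the same tripartition; schematically (an illustrative rendering added here, with passed-the-exam as an informal abbreviation):

     MOST_x [student(x)] [passed-the-exam(x)]

The brow-raised, left-detached topic supplies the restrictor, and the remainder of the clause supplies the nuclear scope.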

A related but different case might be the split that is often found in complex wh-​structures, where the wh-​quantifier occurs at the right edge of the clause, and its restrictor appears either as a left-​edge topic marked with brow raise or in situ. This split is exemplified for LSC in (28), where the restrictor of the wh-​phrase appears in situ and the wh-​element has moved to the right (furrowed eyebrows (fe) are the non-​manual marker for content interrogatives in LSC; see Kelepir, Chapter 11). (28)



   

                              fe
THIS MORNING STUDENT COME WHO
‘Which students came this morning?’

(LSC, Quer 2012: 87)

The tendency has been documented as well for quantified sentences in ASL, where the nominal restrictor of a D-quantifier of different types (universal, existential, numeral) appears either in situ or left-detached and marked with brow raise, as in (29a). If the restriction appears in situ, the quantifier cannot be left-detached (29b).

(29)  a.   re
      BOOK I WANT ALL/SOME/THREE
      ‘I want all/some/three books.’


         re
      b. * ALL/SOME/THREE I WANT BOOK
      (ASL, adapted from Boster 1996, Partee 1995, and Petronio 1995)

Wilbur & Patschke (1999), in trying to explain structures with brow raise marking in ASL, proposed that the non-manually marked constituent is the restriction of a [-wh] operator and occupies the A’-Specifier position of that operator. In this way, they unify the use of topics, focus, relative clauses, conditionals, yes/no questions, and wh-clefts in ASL (see also Abner & Wilbur 2017: 51–52). However, the overt expression of tripartite quantificational structures might be just a tendency rather than a hard-wired grammar rule in these sign languages. Focusing on RSL, Kimmelman (2015b) establishes that the tripartite structure, albeit attested as in (30a,b), where BOY provides the restrictor of the quantifier ALL and LATE is the main predicate in the nuclear scope, is not always overtly marked, as in (30c).

(30)  a.   re
         BOY ALL LATE
         ‘All boys were late.’
      b.   re
         BOY LATE ALL
      c. ALL BOY LATE
         ‘All boys were late.’
      (RSL, Kimmelman 2015b: 129)

He argues that the topicalized restrictor BOY in (30a,b) does not surface for free, and that it induces a partitive reading of the quantified expression: (30a,b) should rather be translated as ‘All of the boys were late’.

19.4.2  Scopal interactions

It is well-known that when two or more quantified expressions co-occur in the same sentence, they can give rise to scope ambiguities. The possibilities will be determined by the capacity of the quantified expression to take larger scope than the position it occupies in syntax or not, which is determined lexically but also structurally. Thus, an English sentence like (31) can in principle yield the two readings reported in (31a) and (31b): in the (31a) reading, the subject two volunteers has scope over the object every candidate, reflecting the surface hierarchical relationship between the two constituents. However, in the (31b) reading, the object phrase takes scope over the subject.

(31) Two volunteers have campaigned for every candidate.
     a. ‘There are two volunteers such that they have campaigned for all candidates.’
     b. ‘For every candidate, there are two possibly different volunteers that have campaigned for him or her.’
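Schematically, the two readings differ only in the relative scope of the two quantified phrases (a rough first-order rendering added here for concreteness, where two-volunteers(X) abbreviates ‘X is a plurality of two volunteers’):

     (31a) subject > object:  ∃X [two-volunteers(X) ∧ ∀y (candidate(y) → campaign-for(X, y))]
     (31b) object > subject:  ∀y [candidate(y) → ∃X (two-volunteers(X) ∧ campaign-for(X, y))]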


Given the different modality of sign languages, the question can be asked whether factors like use of space or some other modality-specific factors block such possible ambiguities in quantificational readings. Although there is little research on this topic, it has been shown that ambiguous quantificational readings can arise among clause-mate quantifier expressions in a sign language like LSC (Quer & Steinbach 2015: 153), as shown in (32).

           re
(32) STUDENT NEW GROUPdist PROFESSOR TWO GUIDE
     a. ‘There are two professors such that each has shown all the new groups of students around.’
     b. ‘For every new group of students, there are two (possibly) different professors that have shown them around.’
     (LSC, adapted from Quer & Steinbach 2015: 153)

Similar scope ambiguities have been found for RSL, as exemplified in (33) (Kimmelman 2017), where the bare object BOOK can take scope over EVERY (33a) or EVERY scopes over the object noun (33b). Kimmelman notes that the universal quantifiers EVERY and ALL show slightly different possibilities of scope taking with respect to another quantifier depending on their syntactic function (and position), with ALL tending to take narrow scope.

(33)

           re
VACATION STUDENT EVERY IXpl READ BOOK PUSHKIN POSS
‘During the vacation every student read a book by Pushkin.’
a. everyone read the same book: one > every
b. everyone read one book, maybe different ones: every > one
(RSL, Kimmelman 2017: 828)

Ambiguity is also noted to arise in questions with a universally quantified subject and a wh-object, as in (34).

(34)

                  re
STUDENT EVERY IXpl ANSWER QUESTION WHICH
‘Which question did every student answer?’
a. everyone answered the same question: one > every
b. everyone answered one question, maybe different ones: every > one
(RSL, Kimmelman 2017: 828)

Bare nouns enter scope interactions with quantified NPs that result in different interpretations, as argued by Petronio (1995) on the basis of the ASL example (35). Out of the blue, the preferred interpretation is (35a), but additional context elicits the two additional interpretations in (35b) and (35c).

(35)

re

BOOK, TWO STUDENT BUY

a. ‘Two students each bought a book.’
b. ‘Two students together bought a book.’
c. ‘Two students bought books.’

(ASL, Petronio 1995: 607)

Languages differ, though, in the range of possible interpretations. The corresponding structure in LSC, for instance, only yields reading (35b) in the unmarked case. Readings (35a) and (35c) only obtain with additional morphological marking on the verb (dual and random reduplication (allocative) morphology), respectively (Quer et al. 2017: 614). Kimmelman (2017) also observes that D-quantifiers interact with A-quantifiers in RSL, as attested in (36).

(36)

BOY TWO DANCE THREE-​TIMES

‘Two boys danced three times.’
a. three times > two boys
b. two boys > three times

(RSL)

Nevertheless, structures are also found where no such ambiguities arise. A good example of this is the effect of the distributive morphology (see Section 19.3 above) in an LSC example like (37), where the numeral in the subject phrase, the possessive in the object phrase, and the agreement verb ASK are all marked for distributivity. Localization of the arguments in signing space and directionality of the verb path from and towards the respective locations exclude any type of scope ambiguity: the subject distributes over the object set.

(37)

                re
STUDENT ONEdist TEACHER POSSdist ASKdist

‘Each student asked his/​her teacher.’

(LSC, Quer 2012: 90)
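The single available interpretation of (37) can thus be sketched as follows (an illustrative rendering added here, with teacher-of(x) abbreviating ‘the teacher associated with x’): each student is paired with his or her own teacher, and no inverse-scope reading arises.

     ∀x [student(x) → ask(x, teacher-of(x))]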

For similar examples of the use of space for scope disambiguation in ASL, see Abner & Wilbur (2017: 54–​55).

19.4.3  Quantifiers and space The central use of space in the signed modality leads to the expectation that some aspects of quantification will resort to the possibilities afforded by this medium (see Schlenker 2018 and Chapter 23). In fact, a number of results reported in the literature point in this direction. Since some of them are summarized in more detail in other chapters, they are only briefly reported upon in this section. Indefinite expressions in LSC have been shown to distinguish overtly between non-​ specific and specific interpretations (Barberà 2015): whereas ​specific indefinites and definites are articulated on a lower plane, non-​specific indefinites are realized at a higher location, as illustrated in (38) (see Barberà, Chapter 18). A similar interpretive distinction marked by height has been found in Turkish Sign Language (TİD) by Kelepir et al. (2018), with the additional finding that a [±lateral] feature encodes clusivity (inclusive vs. exclusive reference). (38) 

WHO^IX-​3pl.up MONEY 3-​STEAL-​3up

‘Someone stole the money.’

(LSC, Barberà & Quer 2013: 254)

For ASL, Davidson & Gagne (2014) interpret differences in the height of quantifiers as contrasts in the size of sets quantified over: larger sets are articulated higher than smaller sets, which are articulated lower. Thus, the higher the location, the larger the widening of the set, as illustrated in (39). (39)

Context: Signer is discussing her friend getting a nanny for her children.
a. IX1 WILL FIND SOMEONELOW
   ‘I will find someone (among the usual group).’


b. 

IX2 MUST FIND SOMEONEHIGH

‘You need to find someone (anyone)!’

(ASL, Davidson & Gagne 2014: 114)

Note that the distinctions are not seen as binary, but with intermediate levels of height. The proposal is not only illustrated with indefinites, but also with plurals and negative quantifiers. Another interesting case, initially described in Schlenker et al. (2013) for ASL and LSF, is the role of space in the possibility of having anaphoric reference to the complement set of a quantifier when a subset has been established for a plural set. When a subset of the larger plural set is not established overtly, anaphorically referring to the complement set of the quantifier is not possible or extremely restricted, as in English (Most students came to class. #They [the students that did not come] stayed home instead). However, if the subset quantified over has been made explicit as in (40a) with a circular movement a within the large plural locus ab for all students (the restrictor set of the quantifier), a subsequent pronoun directed to the complement subset b can pick up the complement set, as illustrated in the felicitous continuation (40b). This anaphoric possibility is excluded in English and can be interpreted as a result of the iconic encoding of the sets in space (for details, see Schlenker, Chapter 23). (40)  a. 

POSS1 STUDENT IX-arc-ab MOST IX-arc-a a-CAME CLASS
‘Most of my students came to class.’
b. IX-arc-b b-STAY HOME
   ‘They stayed home.’

(ASL, Schlenker et al. 2013: 98)

Finally, as discussed in Section 19.4.2, spatial localization can be used to disambiguate potentially ambiguous sentences with multiple quantifiers (Quer 2012; Abner & Wilbur 2017). Thus, referential use of space, being one of the major modality-​specific phenomena (Schlenker 2018), is also responsible for some modality-​specific patterns in quantification.

19.5  Conclusions Despite the limited research on quantification in sign languages, it seems safe to conclude that, as expected, the core properties that have been identified cross-​linguistically in spoken languages are clearly present in languages in the visual-​spatial modality: the division of labor between D-​quantification and A-​quantification is well established, the same arrays of quantifiers appear in their lexicons, and scopal interactions among quantifiers follow the usual patterns. However, some facts of the visual-​spatial modality have also been shown to be at play. The possibility to use simultaneous morphemes and signing space favors the marking of verbal plurality (or pluractionality) and distributivity through this means, for instance. Overt encoding of information-​structural notions also interacts with the realization of tripartite quantificational structures. In addition, spatial distinctions have been shown to be at work in the expression of non-​specific expressions or the size of sets involved in quantified phrases. More research is needed in a broader set of sign languages, but the results obtained so far justify the incorporation of sign language results in the comprehensive analysis of quantification in natural languages, as already proposed in Bach et al. (1995) and Paperno & Keenan (2017), for instance.


Acknowledgments Josep Quer’s work on this chapter has been supported by the SIGN-​HUB project (2016–​ 2020), funded by the European Union’s Horizon 2020 research and innovation program under grant agreement No. 693349. He would also like to thank the Spanish Ministry of Economy, Industry and Competitiveness and FEDER Funds (FFI2015-​68594-​P), as well as the Government of the Generalitat de Catalunya (2017 SGR 1478).

Notes 1 Set-​ theoretically, existential quantifiers are intersective, and universal quantifiers are co-​ intersective (see Keenan (2006) for the definitions). 2 Note that the 18 languages do not represent a balanced typological sample, so the generalizations should be considered with some caution. 3 Note that for some languages on the website, many more signs have been collected than for others, so one cannot conclude that some languages lack these quantifiers altogether based on the website. 4 Although it is debatable whether those are indeed D-​and not A-​quantifiers, according to Quer (2012). 5 Note that despite the gloss, this sign conveys sentential negation and it does not work as a D-​quantifier in the language. It always appears sentence-​finally. 6 Note, however, that in these examples the morphological marker on the verb is combined with a plurally marked NP; this is also possible in ASL (Petronio 1995). 7 These notations were not used by Filimonova (2012), but we introduce them here for uniformity. Superficially, some of these forms look reminiscent of the allocative determinate and indeterminate forms in ASL as reported in Klima & Bellugi (1979), but a proper comparison has not been undertaken.

References Abner, Natasha & Ronnie Wilbur. 2017. Quantifiers in American Sign Language. In Denis Paperno & Edward L. Keenan (eds.), Handbook of quantifiers in natural language:  Volume II, 21–​59. Cham: Springer. Bach, Emmon, Eloise Jelinek, Angelika Kratzer, & Barbara Partee. 1995. Introduction. In Emmon Bach, Eloise Jelinek, Angelika Kratzer, & Barbara Partee (eds.), Quantification in natural languages, 1–​11. Dordrecht: Kluwer. Barberà, Gemma. 2015. The meaning of space in sign language. Reference, specificity and structure in Catalan Sign Language discourse. Berlin: De Gruyter Mouton & Ishara Press. Barberà, Gemma & Josep Quer. 2013. Impersonal reference in Catalan Sign Language (LSC). In Laurence Meurant, Aurélie Sinte, Mieke van Herreweghe, & Myriam Vermeerbergen (eds.), Sign language research, uses and practices: Crossing views on theoretical and applied sign language linguistics, 237–​258. Berlin and Nijmegen: De Gruyter Mouton & Ishara Press. Barberà, Gemma & Josep Quer. 2015. Conveying genericity in Catalan Sign Language (LSC). Presentation at the First SIGGRAM Meeting, Universitat Pompeu Fabra, April 21–​22, 2015. Boster, Charles T. 1996. On the quantifier-​noun phrase split in American Sign Language and the structure of quantified noun phrases. In William H. Emondson & Ronnie B. Wilbur (eds.), International review of sign linguistics, Volume 1, 159–​208. Mahwah, NJ:  Lawrence Erlbaum. Braze, David. 2004. Aspectual inflection, verb raising and object fronting in American Sign Language. Lingua 114(1). 29–​58. Champollion, Lucas. 2015. Every boy bought two sausages each:  Distributivity and dependent numerals. In Ulrike Steindl, Thomas Borer, Huilin Fang, Alfredo García Pardo, Peter Guekguezian, Brian Hsu, Charlie O’Hara, & Iris Chuoying Ouyang (eds.), Proceedings of the 32nd West Coast Conference on Formal Linguistics, 103–​110. Somerville, MA:  Cascadilla Press.


Vadim Kimmelman & Josep Quer Davidson, Kathryn & Deanna Gagne. 2014. Vertical representation of quantifier domains. In Urtzi Etxeberria, Anamaria Falaus, Aritz Irurtzun, & Bryan Leferman (eds.), Proceedings of Sinn und Bedeutung 18, 110–​127. Evans, Nick. 1995. A-​ quantifiers and scope in Mayali. In Emmon Bach, Eloise Jelinek, Angelika Kratzer, & Barbara Partee (eds.), Quantification in natural languages, 207–​270. Dordrecht: Kluwer. Filimonova, Elizaveta V. 2012. Sredstva vyraženija distributivnoj množestvennosti v russkom žestovom jazyke (Means of expressing distributive plurality in Russian Sign Language). In Olga V. Fedorova (ed.), Russkij žestovij jazyk:  pervaja lingvističeskaja konferentsija (Russian Sign Language: the first linguistic conference), 82–​97. Moscow: Buki Vedi. Heim, Irene. 1982. The semantics of definite and indefinite noun phrases. Amherst: University of Massachusetts PhD dissertation. Kamp, Hans. 1981. A theory of truth and semantic representation. In Jeroen Groenendijk, Theo Janssen, & Martin Stokhof (eds.), Formal methods in the study of language, Part  1, 277–​322. Amsterdam: Mathematisch Centrum. Keenan, Edward L. 2006. Quantifiers: Semantics. In Keith Brown (ed.), Encyclopaedia of language and linguistics Vol. 10, 302–​308. Amsterdam: Elsevier. Keenan, Edward L. & Denis Paperno. 2012. Overview. In Edward L. Keenan & Denis Paperno (eds.), Handbook of quantifiers in natural languages, 941–​949. Dordrecht: Springer. Kelepir, Meltem, Aslı Özkul, & Elvan Tamyürek Özparlak. 2018. Agent-​backgrounding in Turkish Sign Language (TİD). Sign Language & Linguistics 21(2). 258–​284. Kimmelman, Vadim. 2015a. Distributive quantification in Russian Sign Language. Paper presented at Formal and Experimental Advances in Sign Language Theory (FEAST), Barcelona, May 4–​6. Kimmelman, Vadim. 2015b. Quantifiers in RSL:  Distributivity and compositionality. In Peter Arkadiev, Ivan Kapitonov, Yury Lander, Ekaterina Rakhilina, & Sergei Tatevosov (eds.), Donum Semanticum, 121–​134. Moscow: Jazyki Slavjanskoj Kuljtury. Kimmelman, Vadim. 2017. Quantifiers in Russian Sign Language. In Denis Paperno & Edward L. Keenan (eds.), Handbook of quantifiers in natural language: Volume II, 803–​855. Cham: Springer. Klima, Edward S. & Ursula Bellugi. 1979. The signs of language. Cambridge, MA:  Harvard University Press. Krifka, Manfred, Francis Jeffry Pelletier, Gregory N. Carlson, Alice ter Meulen, Gennaro Chierchia, & Godehard Link. 1995. Genericity: An introduction. In Gregory N. Carlson & Francis Jeffry Pelletier (eds.), The generic book, 1–​124. Chicago: University of Chicago Press. Kuhn, Jeremy. 2017. Dependent indefinites:  The view from sign language. Journal of Semantics 34(3). 449–​491. Kuhn, Jeremy. 2019. Pluractionality and distributive numerals. Language and Linguistics Compass 13(2). e12309. Kuhn, Jeremy & Valentina Arsitodemo. 2017. Pluractionality, iconicity, and scope in French Sign Language. Semantics and Pragmatics 10(6). Matthewson, Lisa (ed.). 2008. Quantification: A cross-​linguistic perspective. Bingley: Emerald. Meier, Richard P. 2012. Language and modality. In Roland Pfau, Markus Steinbach, & Bencie Woll (eds.), Sign Language. An international handbook, 574–​601. Berlin: De Gruyter Mouton. Paperno, Denis & Edward L. Keenan (eds.). 2017. Handbook of quantifiers in natural language: Volume II. Cham: Springer. Partee, Barbara H. 1992. Topic, focus, and quantification. In Steven K. Moore & Adam Zachary Wyner (eds.), Proceedings of SALT I, 159–​189. Ithaca, NY: DMLL Publications. 
Partee, Barbara H. 1995. Quantificational structures and compositionality. In Emmon Bach, Eloise Jelinek, Angelika Kratzer, & Barbara Partee (eds.), Quantification in natural languages, 541–​601. Dordrecht: Kluwer. Petronio, Karen. 1995. Bare noun phrases, verbs and quantification in ASL. In Emmon Bach, Eloise Jelinek, Angelika Kratzer, & Barbara Partee (eds.), Quantification in natural languages, 603–​618. Dordrecht: Kluwer. Quer, Josep. 2012. Quantificational strategies across language modalities. In Maria Aloni, Vadim Kimmelman, Floris Roelofsen, Galit Sassoon, Katrin Schulz, & Matthijs Westera (eds.), Selected papers from 18th Amsterdam Colloquium, 8291. Berlin: Springer. Quer, Josep. 2019. Reduplication revisited: Verbal plurality and exhaustivity in the visual-​gestural modality. Sensos-​e VI (1). sensos-​e.v6i1.3459.


Quantification Quer, Josep & Markus Steinbach. 2015. Ambiguities in sign languages. The Linguistic Review 32(1). 143–​165. Quer, Josep, Carlo Cecchetto, Roland Pfau, Caterina Donati, Markus Steinbach, Carlo Geraci, & Meltem Kelepir (scientific directors). 2017. SignGram Blueprint. A  guide to sign language grammar writing. Berlin: De Gruyter Mouton. Sandler, Wendy & Diane Lillo-​ Martin. 2006. Sign language and linguistic universals. Cambridge: Cambridge University Press. Schlenker, Philippe. 2018. Visible meaning:  Sign language and the foundations of semantics. Theoretical Linguistics 44(3–​4). 123–​208. Schlenker, Philippe, John Lamberton, & Mirko Santoro. 2013. Iconic variables. Linguistics and Philosophy 36(2). 91–​149. Steinbach, Markus. 2012. Plurality. In Roland Pfau, Markus Steinbach, & Bencie Woll (eds.), Sign language: An international handbook, 112–​236. Berlin: De Gruyter Mouton. Wilbur, Ronnie B. & Cynthia Patschke. 1999. Syntactic correlates of brow raise in ASL. Sign Language & Linguistics 2(1). 3–​41. Zeshan, Ulrike, César Ernesto Escobedo Delgado, Hasan Dikyuva, Sibaji Panda, & Connie de Vos. 2013. Cardinal numerals in rural sign languages: Approaching cross-​modal typology. Linguistic Typology 17(3). 357–​396.


20 IMPLICATURES
Theoretical and experimental perspectives
Kathryn Davidson

20.1  Formal pragmatics and the theory of implicature

The study of natural language meaning revolves around the problem of how one person can communicate their thoughts to another successfully, even when there are in principle infinitely many thoughts, many of which the speakers will have never previously attempted to communicate. To do this successfully, two people engaged in a conversation must understand the structure of their language: how the underlying syntactic structure supports semantic combinations to build meaning for new sentences that speakers may have never heard before. Understanding how this works is the domain of lexical semantics (word level) and compositional semantics (phrase/sentence level). In addition to the logical meaning, however, two people attempting to communicate new thoughts must also know something beyond the literal, logical, meaning of a sentence. They must know what a given sentence means in a particular context, because most sentences have different meanings in different contexts – this is the domain of pragmatics. The problem of how language users apply contextual information to literal meanings to finally arrive at the intended meaning, that is, the semantics/pragmatics interface, is one of the most challenging problems not just in linguistics, but in cognitive science more broadly. This chapter is concerned with one of the most well-studied phenomena at the semantics/pragmatics interface, known as implicature. Owing to the fact that most work in this field has been on a specific type of implicature, scalar implicature, that will be the primary focus of this chapter. The term implicature was first used by the philosopher H. Paul Grice (1970), who was interested in the semantics/pragmatics interface problem: what part of the meaning of a sentence is hard-coded, and what part is context-dependent? This is a particularly important problem for formal semanticists, following in the tradition of Montague (1973) and his followers tying formal semantics to syntax, who analyze natural language using an underlying system of formal logic. To see why, consider the example of the English word or: in a formal logic class, or is usually translated using the disjunctive operator (symbolized ∨). In this system, A ∨ B is true if A is true, or if B is true, or if both A and B are true. However, in natural language, many people take She drank coffee or tea to mean that she either drank coffee, or she drank tea, but she did not drink both. There is a


tension, therefore, between the most natural logical translation of some terms in natural language and their actual interpretation by users of languages (in this case, whether or rules out the possibility of both). Grice’s solution to this problem was to keep the meaning of natural language expressions as close as possible to their logical equivalents. In doing this, he argued that much of linguistic meaning comes about through pragmatic reasoning about the intentions of the speaker, after the literal, logical meaning of a sentence has been computed. He suggested that the way interlocutors reason about each other’s language use can be characterized by four maxims of conversation that users expect each other to be following: 1. The maxim of quantity: Be as informative as possible: give as much information as is needed, and no more. 2. The maxim of quality: Be truthful: do not give information that is false or that is not supported by evidence. 3. The maxim of relation: Be relevant: say things that are pertinent to the discussion. 4. The maxim of manner: Be as clear, as brief, and as orderly as one can in what one says, and avoid obscurity and ambiguity. In the case of or, speakers try to be as informative as possible (following the maxim of quantity) while still being truthful (following the maxim of quality). Grice suggested that if a speaker already knows both of the things that or connects to be true (e.g., that Mary drank coffee, and also that Mary drank tea), then they should use a more informative term than or:  they should have used and (i.e., by saying Mary drank coffee and Mary drank tea). Since any English speaker that uses or always has and also at their disposal, then usually when they say or, they are not able to say and without violating the maxim of quality, and so generally, or is interpreted pragmatically as A or B but not both. Thus, or has a basic, logical meaning (where one or the other or both disjuncts can be true, equivalent to Ú) as well as a pragmatically enriched meaning (where one or the other disjunct can be true, but not both). The difference between these meanings, Grice suggested, is due to a scalar implicature: an unsaid implication that use of a weaker term implies that use of the stronger term would have been false. It is possible to see already that Grice’s (and many others’) view of this classic problem at the semantic/​pragmatic interface was English/​Indo-​European-​centric: not all languages have translation equivalents to English and and English or. The Yuman language Maricopa famously uses a simple juxtaposition of noun phrases to mean either conjunction (1a) or disjunction (1b) (Gil 1991: 99), and similarly, use of the same construction or morpheme for both meanings has also been reported for Japanese (Ohori 2004), Warlpiri (Bowler 2014), Cheyenne (Murray 2017), and American Sign Language (ASL) (Davidson 2013: 3) (2). (1)  a.  John-​s Bill-​s v?aawuum. John-​NOM  Bill-​NOM 3.come.PL.FUT ‘John and Bill will come.’  b. John-​s Bill-​s v?aawuumsaa. John-​NOM Bill-​NOM  3.come.PL.FUT.INFER ‘John or Bill will come.’


(Maricopa, Gil 1991: 99)


(2)  a.  Context: I know that everyone had exactly one drink at the party. I wonder what Mary chose to drink at the party and ask my friend. She replies: (i)  IXMary DRINK COORD-​L TEA COORD-​L COFFEE. (ii)  IXMary DRINK TEAa COFFEEb.   ‘She (Mary) drank tea or coffee.’ b. Context: I see that Mary looks very caffeinated and sick! I wonder how much caffeine she may have had and ask my friend. She replies: (i)  IXMary DRINK COORD-​L TEA COORD-​L COFFEE. (ii)  IXMary DRINK TEAa COFFEEb.   ‘She (Mary) drank tea and coffee!’   (ASL, Davidson 2013: 3) Example (2) also illustrates that the same strategy being interpreted as and and as or is actually available with multiple forms of syntactic coordination in ASL, both coordination on the hands (examples (2a-​i) and (2b-​i) and Figure  20.1a) and coordination through placement in space (examples (2a-​ii) and (2b-​ii) and Figure  20.1b). Either of these strategies can be used both for conjunction (“and”) and disjunction (“or”) as shown in (2), and can be used for coordinating noun phrases, verb phrases, and even full clauses (Davidson 2013).


Figure 20.1  ASL coordination strategies: (a) coord-​L; (b) coord-​shift (© Davidson 2013, reprinted with permission)

The pattern posed by ASL and other languages where conjunction and disjunction can be realized in the same way raises new questions for a Gricean theory of implicature: if a language conveys two separate concepts without the use of two contrasting lexical items, do scalar implicatures still arise? In other words, are users of these languages more likely to get the very “logical” meaning of “or”, that one or the other or maybe both disjunctions is true, when they mean disjunction, given that there is not a stronger contrasting term and? And further, what should be the underlying meaning (the semantics) for these kinds of items: are they just ambiguous and happen to be pronounced the same, or is there a single shared meaning, such that what seems like disjunction or conjunction comes about through the pragmatics? This is important because users of these languages might arrive at a logical meaning for or not because of a lack of implicatures, but just because the same term can also mean and. In what follows, we will take ASL as a case study to see how users of ASL actually interpret these constructions. Before moving on, though, we will discuss one further note about scalar implicatures. If all of this discussion were about the pair of logical words , we might 442

443

Implicatures

consider all of this theoretical machinery about maxims, etc., to be overkill. In fact, though, the same kind of reasoning can be applied to a very wide range of terms that stand in the same entailment relationship to each other (meaning that use of one term is strictly stronger than another, in the way that logical and is strictly stronger than logical or). Other examples include positive quantifiers , as in the English example in (3) and its ASL equivalent in (4), negative quantifiers , numbers , temperatures , verbs , and modals . (3) Mary drank some of the tea. Implicates: Mary did not drink all of the tea. (4)

ixMary drink some tea

Implicates: Mary did not drink all of the tea.

(ASL)

Consider examples (5)  to (7). One signature of implicatures in general, and scalar implicatures in particular, is that they can be canceled, as in the (a)  examples, and strengthened, as in the (b) examples. (5) a. b.

Mary drank coffee or Mary drank tea, but not both. Mary drank coffee or Mary drank tea, and maybe both.

(6) a. b.

Mary drank some of the tea, but not all. Mary drank some of the tea, and maybe all.

(7) a. b.

Mary likes tea, but doesn’t love it. Mary likes tea, and maybe even loves it.

This kind of strengthening and canceling is traditionally a marker of setting off the pragmatic inferences of implicatures from entailments, which are part of the semantic/​ logical meaning of a sentence. Consider examples (8) to (10), where the scalar items (, , ) from (5) to (7) are reversed. Here, the cancelations and strengthenings are nonsensical because the stronger term entails the weaker one, so you cannot defeat (a) or strengthen (b) the result. Thus, the relationship between items in a scale is asymmetrical. (8)

a. b.

#Mary drank coffee and Mary drank tea, but not either. #Mary drank coffee and Mary drank tea, and maybe either.

(9)

a. b.

#Mary drank all of the tea, but not some. #Mary drank all of the tea, and maybe some.

(10)

a. b.

#Mary loves tea, but doesn’t like it. #Mary loves tea, and maybe even likes it.

Traditionally, the difference between the canceling and strengthening ability of the implicature and the inability to do so in entailments served to place the former squarely in the 443

444

Kathryn Davidson

domain of post-​compositional pragmatics, and the latter in the domain of semantics. Recent work, however, has raised the possibility of a more grammatical type of implicature, still able to be canceled and strengthened, but triggered by something in the syntax/​ semantics of the structure, not just in post-​ processing pragmatic reasoning. Chierchia et al. (2012) point out some examples of what they suggest are embedded scalar implicatures, as in (11). (11)

John believes that Mary drank coffee or tea. Implicates: John believes that Mary drank coffee or tea but not both.

In cases like (11), Chierchia et al. argue that the strengthened meaning of or cannot have arisen simply as a result of the processes of reasoning using Grice’s maxims because such maxims should apply to entire utterances, after their logical meaning is computed. In (11), the “stronger” version one must compare to is something smaller than the entire utterance, in fact, it is the clause embedded within the matrix clause, under believe. They take this to suggest that the strengthened version of or cannot always simply arise via two interlocutors following conversational maxims about quantity, but rather that the language must also include operators in the syntax that act specifically to do the work of the implicature in the formal semantics. That is, they suggest that what was thought to be reasoning about how people use language (pragmatic phenomena) are sometimes actually arising from the (literal, logical) language itself, through the presence of what amounts to a “silent only” operator, also known as an exhaustivity operator, making (11) logically equivalent to (11’). (11’)  John believes that Mary drank [only] coffee OR tea. This kind of argumentation has a number of consequences. For example, we saw earlier that scalar implicatures differ from entailments in that they are cancelable. So, what is it that causes a scalar implicature to arise, and why is it optional? A grammatical theory of implicature must deal with this problem, and is thus obligated to postulate linguistic triggers of implicatures. One way to do this is to say that if a particular word belongs to a scale, then the fact that it has scalar alternatives means that the exhaustivity operator can apply. Hence, a word like some combines with the exhaustivity operator in a meaningful way because some has the scalar alternative all, while other words like Mary or drink do not (at least, in a basic context without further information). This raises new questions:  Does the shape of alternatives matter? Do they have to have separate articulations, or just possible separate meanings? Sign languages have the potential to provide answers to some of these questions. We saw above that in ASL, conjunction and disjunction can be expressed by the same type of construction –​be it a construction involving the manual strategy COORD-​L or a construction involving the spatial strategy COORD-​shift. This raises the question of whether they are ambiguous, i.e., and and or just happen to be pronounced alike in ASL (like the English financial and river bank). Another possibility is that they are two translations of a single underlying lexical item that has a more general meaning (something like “collect these into a set”). A third possibility is that another trigger, such as non-​manual marking (movements of the face, torso, etc., not including the hands) provides the distinction; if this is the case, it raises the question of whether non-​manual marking can distinguish two

444

445

Implicatures

items for the purposes of a scale. Davidson (2013) provides theoretical arguments to rule out ambiguity, but the latter two possibilities remain. The following sections address experimental work investigating these issues by looking at the interpretation of scalar words in ASL. Sign languages can shed light on these important questions about scalar implicatures in a number of ways, including the grammatical versus pragmatic debate, as well as what kinds of lexical items can form alternatives for each other. Within the field of sign language research, experimental investigation of scalar implicatures can also help uncover the ways in which the modality of signal transmission –​visual-​gestural vs. oral-​auditory –​influences pragmatic inferences that are drawn, which is currently a very open question. For example, does the use of space in general, and spatial loci in particular, influence inferences about the maxim of quantity? We turn to these issues next.

20.2  Experimental investigations of implicatures In spoken languages, scalar implicatures are one of the experimentally most studied phenomena at the interface of semantics and pragmatics, for a number of good reasons. First, it was discovered that children behave differently from adults when it comes to interpretations of scalar items: they seem to interpret sentences more “logically”, that is, with a weak reading (some as some and maybe all) without the strengthening provided by the scalar implicature (some as some and not all) (Noveck 2001; Papafragou & Musolino 2003; Huang & Snedeker 2009; a.o.). As for sign languages, it is still an open question whether young children learning ASL behave in the same way, although given that the pattern holds in typologically diverse languages, we would expect it to show up in ASL if no other factors are at play. (There has been some investigation on the acquisition of scalar implicatures by deaf adults who are late learners of a sign language, compared to deaf adults who are native signers, which we will return to in Section 20.6.) A second reason why scalar implicatures have been studied so extensively for spoken languages using experiments is that even adults’ interpretation can be manipulated by a variety of factors, including cognitive load (De Neys & Schaeken 2007) and linguistic and extralinguistic context (Bott & Noveck 2004; Grodner et  al. 2010; a.o.). Therefore, scalar implicatures provide an informative area for understanding the relationship between cognition and language meaning. A third reason, important from the perspective of a theoretical linguist, is that the interpretations given by speakers in carefully controlled contexts can help determine the right choice among competing theories of the grammar of scalar implicature. Since Chierchia (2006), Fox (2007), and Chierchia et al. (2012) argued for the possibility that scalar implicatures are grammatical, there has been an explosion of research that aims to determine how adults’ interpretations vary, and whether they form an integrated part of sentence meaning in the same way as other grammatical/​compositional parts. Given the high interest in the psycholinguistics of scalar implicatures within experimental linguistics, it is natural to ask whether the same occurs in sign languages, and whether the visual-​manual modality can help answer any questions that have been raised by experimentalists and theoreticians. So far, ASL seems to be the only sign language with experimental work on scalar implicature, and this has only been studied in adults in a single series of three studies: (i) one study that focuses on modality (Davidson 2014), which will be discussed in the next section; (ii) one study that addresses the issue of

445

446

Kathryn Davidson

alternatives with conjunction and disjunction (Davidson 2013), which will be the focus of Section 20.4; and (iii) one study on the acquisition of late learners, to be discussed in Section 20.5, that begins to extend the study of scalar implicatures in children to questions about the effects of late first language acquisition among deaf adults (Davidson & Mayberry 2015).

20.3  Scalar implicatures in the sign modality A Gricean theory view of scalar implicatures makes universal predictions about natural language pragmatics: in any context in which a language may employ terms that can be strictly ordered through entailment (e.g., , , etc.), use of a weaker term will implicate that use of the stronger term would violate the maxim of quality (i.e., would be false). This should be how any participants in a discussion expect interpretation to proceed if both are following the maxims. Because these maxims are not specific to the spoken language modality, it is reasonable to expect to discover similar behavior in sign languages (Baker & van den Bogaerde 2012). There is, however, very little experimental work on the pragmatics of the visual/​manual use of space for communication –​be it in gesture with spoken languages or in sign languages –​and it is therefore quite possible that there are semantic/​pragmatic phenomena involved specifically with the use of space that may be uncovered in work on scalar implicatures in sign languages. In a first study to address the issue of scalar implicatures in a sign language, Davidson (2014) compared English native speakers’ interpretations of sentences in English to ASL native signers’ interpretations of sentences in ASL in a computer-​based behavioral study. Three types of scales were compared across the two languages, and thus the two modalities, in this study, schematized in Figure 20.2. The first two, quantifiers (Figure 20.2, left) and numerals (Figure 20.2, middle), are two of the most well-​studied scales in English, and so counterparts in ASL acted as controls and were tested in a way that made no use of three-​dimensional space. These scales were hypothesized to lead to similar interpretations between the English and the ASL groups. A third scale was tested that would rely more on the visual-​manual modality: a list of items was produced in order in English (e.g., “There is a candle and a globe”), while the same items were listed and assigned to locations in space in ASL using referential loci (“There is a candle at point a and a globe at point b”) using the COORD-​ shift strategy presented in Section 20.1. Ad hoc scales can be constructed from lists of items, so that if in fact three items were present (e.g., a wallet, a candle, and a globe), but the target sentence only mentioned two of them, then that sentence was being underinformative, similar to using “some” to describe a situation when all is true. Investigating any differences between English speakers’ and ASL signers’ interpretations of such underinformative descriptions could help shed light on the pragmatics of assigning spatial loci in ASL. Controls were created for each underinformative test case:  these controls involved a match between description and picture (“Match controls”) or a Mismatch (presented on the right in Figure  20.2). Each trial in the experiment consisted of a picture and a video (with a native speaker of English or a native signer of ASL, depending on the participant group), and participants were simply asked whether they were “satisfied that the picture matched”, in which case they should press the “smile” button; otherwise, they should press the “frown” button (see bottom of Figure 20.2). 446


Figure 20.2  A scheme of the experimental design for the study in Davidson (2014). The numbers (“4”) indicate how many of each trial type were seen by each participant; trials were counterbalanced using the Latin Square method such that each situation (colored cans, number of bears, etc.) only appeared in one trial type per participant. Pictures at bottom show a screenshot of what participants viewed during a single (quantifier) trial (© John Benjamins, reprinted with permission)


The results of this study showed that, as expected, the quantifier and number scales elicited scalar implicatures (rejections of underinformative descriptions) in both English and in ASL, with no significant differences between the two groups on either scale. On the Ad Hoc scale, however, where the ASL sentences made use of 3-​dimensional space in assigning items to loci, there were (marginally) more rejections by signers in ASL than by speakers of English in underinformative examples. In other words, when the stimulus stated that there was a candle and a globe (and in ASL assigned these to locations in space), signers in ASL were more likely to reject this description when there was also a wallet than English speakers were. On all control items, English speakers and ASL signers behaved similarly (Table 20.1). Table 20.1  Mean rejection rates for each sentence type in English and in ASL from Davidson (2014), with standard deviations following in parentheses

                            English                                  ASL
                            Quantifiers   Numbers       Ad Hoc       Quantifiers   Numbers   Ad Hoc
Match (Control)             0             0             0            0.19 (0.18)   0         0.09 (0.19)
Mismatch (Control)          1             1             1            1             1         1
Underinformative (Test)     0.77 (0.38)   0.96 (0.14)   0.54 (0.45)  0.84 (0.23)   1         0.88 (0.35)

This initial study of scalar implicatures in the visual-​gestural modality served two purposes. First, it shows that whatever principles of conversation and language use, such as Grice’s maxims, lead English speakers to reject underinformative descriptions on quantifiers and numbers, the same behavior is found among ASL signers, so there are no indications that these principles differ between spoken and sign languages. Second, there are some indications that when space can be used in ASL coordination, it may have pragmatic consequences. This raises many more questions than answers, and first among these is whether co-​speech gesture accompanying English sentences would have the same result. Hopefully, future research can determine whether this is a property of linguistic loci in particular, or a more general pragmatic use that is also found in co-​speech gesture. Another question this research raises is what happens when a scale looks very different between ASL and English in a way that is unrelated to space? For some answers to this question, we will turn in the next section to experimental work on the interpretation of conjunction and disjunction in ASL.

20.4  Scalar implicatures based on conjunction/disjunction in ASL

Sometimes sign languages can inform natural language linguistics not just because they occur in a different modality, but simply because they provide a rich new source of understudied languages. As we noted above, English has separate words for conjunction (‘and’) and disjunction (‘or’), but other languages may use the same syntactic coordinator to express either of these meanings. As pointed out above, this is the case for at least a few understudied spoken languages, as well as ASL (see (1) and (2)). Davidson (2013) uses this property of ASL to understand the role of scalar


alternatives in implicature calculation: given that <or, and> is one of the most studied scales in English (along with quantifiers and numbers, which we just saw behave similarly in ASL and in English), we would expect similar implicature patterns in ASL under normal conditions. However, because ASL has what Davidson calls “general use coordinators” that simply perform syntactic coordination but do not specify the logical relationship of either disjunction or conjunction (see Figure 20.1), it is quite possible that the pattern of implicatures looks very different for the coordination scale in ASL. To test whether this linguistic difference leads to different patterns of implicature, a group of native speakers of English and a group of native signers of ASL (a different group from the previous study) participated in an experiment using the same methods described above to study modality and scalar implicatures. The only difference this time was the scales compared: quantifiers were still a baseline, but now they were compared with coordination. In English, coordination was instantiated with and and or, while in ASL, the same concepts were conveyed through non-manually distinguished COORD-shift (Figure 20.3).


Figure 20.3  Non-​manual differences between conjunctive interpretations (a) and disjunctive interpretations (b) of the COORD-​shift strategy (© Davidson 2013, reprinted with permission)

The results of this study were surprising and robust: while both groups (English speakers and ASL signers) treated quantifiers in the same way, calculating scalar implicatures in the crucial underinformative test trials, they differed in their treatment of the coordination scale. English speakers calculated scalar implicatures for coordinators just like they did for quantifiers , both at the same rate of 77%. However, ASL signers, who robustly calculated scalar implicatures for quantifiers (at a rate of 90%), did not do so for coordinators, which they only rejected as underinformative (when COORD-​shift(‘or’) was used even though both disjuncts were true) at a rate of 35% (see Table  20.2). This is particularly surprising given that they generally accepted the Match controls (at a rate of 80%) and rejected the Mismatch controls (at a rate of 83%) for the coordination scale, which both also used non-​manual markings, just like in the Test case, so the results could not have been obtained just by ignoring non-​manuals. Taken together, this pattern suggests that while non-​manuals were salient and used for purposes of truth conditions, they did not contribute enough of a contrast to form a pragmatic scale for purposes of scalar implicature calculation. We can conclude, then, that a scale must be formed of two separate lexical items, and that other information (non-​manuals, context, etc.) is not sufficient to provide contrast for the purpose of creating scalar 449


alternatives. Along with other properties of these coordinations, these results suggest a formal semantic analysis of general use coordination that involves a single underspecified semantics for conjunctive and disjunctive interpretations (Davidson 2013). Table 20.2  Mean rejection rates for each sentence type in English and in ASL from Davidson (2013), with standard deviations following in parentheses

                            English                         ASL
                            Quantifiers   Coordination      Quantifiers   Coordination
Match (Control)             0             0.02 (0.07)       0.05 (0.16)   0.20 (0.20)
Mismatch (Control)          1             0.92 (0.16)       0.98 (0.08)   0.83 (0.17)
Underinformative (Test)     0.77 (0.38)   0.77 (0.36)       0.90 (0.13)   0.35 (0.21)

The consequences of this study for the theory of scalar implicature are cross-​ linguistic: the finding was not primarily due to the modality of the language (a sign language), but rather to the fact that ASL differs from English in that it has this particular property of coordination alternatives. Therefore, we would expect that other languages –​ spoken or signed –​which show the same use of general use coordinators would exhibit the same pattern. That said, it does seem to be the case that many sign languages make use of general use coordinators, many of them in the same way, either a list buoy (Liddell 2003) (COORD-​L) or simply juxtaposing coordinates that are associated with referential loci (COORD-​shift) (Tang & Lau 2012). Future research would do well to explore whether the same patterns hold across other sign languages, and how many use a form of coordination of this kind.

20.5  Acquisition of scalar implicatures: theory We have seen so far that adult native speakers of English and adult native signers of ASL show similar rates of scalar implicature calculation in each language for a typical scale like quantifiers. Although differences appear when space becomes involved (in ASL) or scales have a different lexical structure (general use coordination), it seems clear that adults follow the same process for calculation of typical scalar implicatures, in both sign and spoken languages. This should not be taken for granted, as there are some groups who do not show the same pattern of scalar implicature calculation, most notably young preschool-​aged children. One foundational study in scalar implicature acquisition was carried out by Papafragou & Musolino (2003), who provided children with a truth/​felicity judgment task. The experimenter acted out a situation that involved toys in front of the children, and then described the scene; the child participant then either accepted or rejected this description of the scene. In a crucial trial example, an experimenter would show three horses, and all three would jump over a fence. Then the experimenter would comment that “some of the horses jumped over the fence”. Adults would overwhelmingly reject this description of such a situation (at a rate greater than 90%), while children would overwhelmingly accept it (rejecting it less than 20% of the time). The same pattern was seen with : while children would accept “start” as a description of a situation that could also be described by “finish” (e.g., when a character ran an entire race), adults would reject it. 450


On numbers (), children were much more likely to reject an underinformative description than in the other scales, though still to a lesser extent than adults. In recent years, much research has been focused on trying to understand why children’s behavior is so different in this domain from that of adults, seemingly more “logical” (less pragmatic) in accepting the literal meaning. One hypothesis is that children simply lack the cognitive processing capacity to do this task in the way that it is presented. Chierchia et al. (2001) and Gualmini et al. (2001) provided children with alternative descriptions and asked which one was better (e.g., in the above case, “some” or “all” of the horses). They found that when explicitly given the choice between a felicitous and an underinformative description, children performed much closer to adult-​like behavior. Katsos & Smith (2010) and Katsos & Bishop (2011) proposed instead that children have a different tolerance for pragmatic infelicity than adults. They argue this based on an experiment in which children are given more than three possible responses, instead of just the usual binary accept/​reject, and can reward a puppet with a small, medium, or large fruit, depending on their evaluation of the response. These authors found that children frequently give a medium-​sized fruit to a puppet who expresses a true but underinformative utterance, suggesting that children do differentiate these both from false examples (for which they give a small fruit) and true and felicitous examples (for which they give the best, large fruit). Finally, Barner et al. (2011) argue that children have the appropriate cognitive capacity and do not have a different tolerance, but merely cannot call up or do not know the appropriate scalar alternatives. They illustrate this with children’s success on ad hoc trials, which are merely contextual scalar alternatives (like the aforementioned ), and for which children behave just like adults (Papafragou & Tantalou 2004; Stiller et al. 2011). One problem that studies of children’s behavior regarding semantics and pragmatics in general, and scalar implicatures in particular, are faced with is that children’s cognitive and social development is inextricably linked to their linguistic development. Just as they are developing the cognitive maturity to hold and consider multiple possibilities at once, they are developing the social norms of their communities that would teach them about appropriate pragmatic tolerance, and they are also developing the linguistic knowledge of what alternatives form scales in their language. It would be helpful, therefore, to be able to dissociate some of these pieces of development from each other. This has been the goal behind recent work on scalar implicatures in children and adults with autism, which has shown no differences between adults with and without autism diagnoses –​a surprise, given young children’s behavior and the seemingly crucial role of pragmatics modeling others’ mental states, which is often taken to be more difficulty for those with autism (Pijnacker et al. 2009; Chevallier et al. 2010). The problem of dissociating underlying causes has also prompted recent studies on scalar implicatures in adult Broca’s aphasics, who have experienced the cognitive and social development of neurotypical adults, but then suffered from a language impairment; this group also shows no significant impairments on scalar implicatures (Kennedy et al. 2015). 
The next section focuses on yet another group where different aspects of development are dissociated: Deaf adults with delayed first language exposure.

20.6  Scalar implicature and age of first language acquisition: experiment

One way to dissociate language experience from cognitive and social maturity would be to look at individuals who learned language at different ages. For example, Slabakova (2010)
investigated scalar implicatures in adults learning a second language, namely Korean learners of English. Finding, yet again, mostly success for this group of adults, Slabakova was nevertheless unable to determine whether language exposure mattered, because most scalar items are very easily translated from one language to another. As we saw above, both ASL and English show similar behavior with respect to quantifiers, so it would be easy for adult second language learners of any language to perform the task for most scales in their native language and simply translate back for the purposes of the task. Therefore, the key would be to test individuals who learned a language late in life but did not have a first language to fall back on. Davidson & Mayberry (2015) aimed to do just that through investigating scalar implicatures in Deaf adults who all had ASL as their first language, but who were exposed to ASL at varying ages. Figure 20.4 illustrates the heterogeneity in the participants in the study by Davidson & Mayberry (2015). As can be seen, participants included eight native signers (who were exposed to ASL from birth/age 0), six "early non-native" signers (who were first exposed to ASL as children), and seven "late non-native" signers (who were all exposed to ASL after puberty). Although all participants, living in the United States, also had significant exposure to English, all self-reported that ASL was both their dominant and first language on a variety of measures. Participants were tested in the same experimental paradigm as discussed in Sections 20.3 and 20.4 (and in fact, the native signers were the same group of signers as reported in the coordination study). The goal of the study was two-fold: (i) to understand the effect of delayed first language acquisition on scalar implicature calculation; and (ii) to understand how a delayed first language can affect semantic/pragmatic competence, given the long line of research showing that deaf signers exposed to a sign language at an earlier age outperform peers exposed to sign at later ages on a wide variety of language measures, both in sign languages and in subsequently learned spoken languages (Mayberry et al. 2002; Boudreault & Mayberry 2006; a.o.). This study was the first to target the level of semantics and pragmatics.

Figure 20.4  The age at time of testing, and age of first exposure to ASL, of the participants in Davidson & Mayberry (2015; © Taylor and Francis, reprinted with permission)

Results showed that early native signers were not significantly different from early non-native signers, so late non-native signers (N=7) were compared to early (both native and non-native) signers as a single group (N=14). These two groups generally performed similarly across a wide variety of implicatures, with both showing more success than children typically do on such a task. However, there was one area in which late learners did not calculate as many implicatures as native/early signers, and that was on the Test case in the quantifiers scale (recall, this is the prototypical ⟨some, all⟩ case of scalar implicature). In other underinformative cases, like the spatial ad hoc scale and the coordination scale, both of which had ASL-specific properties that could not have been transferred from English (the use of space and the underdetermined scale, respectively), the two groups showed no differences. This suggests that all participants were indeed proficient and dominant in ASL, and yet, there seems to be a slight advantage for native/early signers in some cases: they showed the strongest rates of scalar implicature calculation in places that children are known to especially struggle with (e.g., scalar quantifiers).

Table 20.3  Mean rejection rates for each sentence type in ASL by each group of signers from Davidson & Mayberry (2015), with standard deviations following in parentheses

                              Match (Control)   Mismatch (Control)   Underinformative (Test)
Early/Native Learners
  Quantifiers                 0.08 (0.18)       0.95 (0.10)          0.94 (0.11)
  Spatial                     0                 1                    0.98 (0.06)
  Coordination                0.16 (0.18)       0.88 (0.16)          0.39 (0.26)
Late Learners
  Quantifiers                 0.14 (0.20)       1                    0.75 (0.29)
  Spatial                     0.04 (0.09)       1                    1
  Coordination                0.14 (0.13)       0.79 (0.17)          0.32 (0.19)
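
As a concrete illustration of how condition-by-group means of this kind can be computed from trial-level judgments, here is a minimal sketch; the data structure and the example values are invented placeholders, not the study's actual data.

    from collections import defaultdict

    # Each trial: (group, scale, condition, rejected), with rejected in {0, 1}.
    trials = [
        ("early/native", "quantifiers", "underinformative", 1),
        ("early/native", "quantifiers", "underinformative", 1),
        ("late",         "quantifiers", "underinformative", 0),
        ("late",         "quantifiers", "underinformative", 1),
    ]

    sums = defaultdict(lambda: [0, 0])  # key -> [number of rejections, number of trials]
    for group, scale, condition, rejected in trials:
        cell = sums[(group, scale, condition)]
        cell[0] += rejected
        cell[1] += 1

    for key, (rejections, n) in sorted(sums.items()):
        print(key, "mean rejection rate =", round(rejections / n, 2))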

The results from this study point toward some influence of the still developing language component on children's difficulty with scalar implicature calculation, although these adults never performed like children in accepting a majority of underinformative descriptions. This suggests, then, that at least some part of the difficulties that children have does lie in being children with regard to cognitive and social development. With regard to modules of language affected by delayed first language exposure, semantic/pragmatic effects seem to be present but limited, and especially limited to areas where learned lexical knowledge (e.g., the existence of the scalar contrast ⟨some, all⟩) plays a crucial role; when deduction within context is enough (e.g., in the spatial ad hoc case), the differences between groups disappear. Certainly, more work should be done to follow up and directly compare different levels of language exposure and development, and the age of the participants. It would also be helpful to include direct comparisons to other atypical language development patterns, which can both further illuminate the semantic/pragmatic skills of these groups and help us understand what occurs in typical language development, in sign and in speech alike.

20.7  Other implicatures based on modality

The focus of this chapter has so far been nearly exclusively on one type of implicature, scalar implicature, based on Grice's maxim of quantity. However, sign languages are also an extremely fertile ground for testing and understanding more about conversational
maxims generally and other kinds of implicatures; in return, such pragmatic investigations show immense promise for providing new insights into longstanding puzzles in sign linguistics. One example of an avenue for future research comes from the interaction of the language modality with the maxims of manner and quality. Davidson (2015) discusses a formal semantics for classifier predicates, which, along with a discrete morphological handshape, can encode analog/gradient information about the location and movement of objects (Emmorey & Herzig (2003); Zucchi et al. (2011); see Zwitserlood (2012) for an overview of classifiers; also see Tang et al. (Chapter 7) and Zwitserlood (Chapter 8)). In Davidson's proposed semantics, the analog/gradient information is conveyed through a demonstration, the level of detail of which is pragmatically determined. For illustration, consider the ASL example in (12), which involves each hand using an upright human classifier handshape (CL-1, index finger extended), in a neutral location, moving toward each other. The interpretation is that two people walked or went toward each other, and the default assumption is that it was a straight path, but this is only implicated pragmatically. If in fact they moved in a zigzag pattern, this can be added information; in this case, there is no need for the interlocutor to deny the original statement (12a). However, if a marked manner or path is used, as in (12b), where the path is given as zigzag, then an interlocutor who disagrees must actually deny the original description.

(12) a. Right hand: CL-1(straight movement toward center)
        Left hand:  CL-1(straight movement toward center)
        'The two people walked toward each other.'
        #Response: 'No, they went toward each other in a zigzag pattern!'
        Response:  'And actually, they went toward each other in a zigzag pattern!'
     b. Right hand: CL-1(zigzag movement toward center)
        Left hand:  CL-1(zigzag movement toward center)
        'The individuals walked toward each other in a zigzag pattern.'
        Response:   'No, they went straight toward each other!'

(ASL)

Much more detailed work should be done on the formal pragmatics of classifier constructions, and on what kind of information is asserted, implicated, and presupposed; this is merely a suggested beginning for an avenue for future research. New findings in this area would not only break open new understanding about classifiers, but also may raise new questions about how similar or different the inferential process is for their gradient spatial counterparts in co-​speech gesture, thus helping to inform the longstanding puzzle of the relationship between sign, speech, and gesture.
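
One way to schematize the division of labor suggested by this discussion is to separate what the classifier predicate asserts from what the unmarked form merely implicates; the notation below is ours, a sketch rather than Davidson's own formalization.

    % Schematic only: assertion vs. implicature for (12a) and (12b)
    \begin{align*}
      (12a)\ \text{asserts:}    &\quad \exists e\,[\mathrm{move\text{-}toward}(x, y, e)] \\
      (12a)\ \text{implicates:} &\quad \text{the path of the motion was straight (defeasible; correctable by addition)} \\
      (12b)\ \text{asserts:}    &\quad \exists e\,[\mathrm{move\text{-}toward}(x, y, e) \land \mathrm{zigzag}(\mathrm{path}(e))] \\
      (12b)\ \text{entails:}    &\quad \text{the path of the motion was zigzag (disagreement requires denial)}
    \end{align*}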

20.8  Conclusions

Nearly all of the studies discussed in this chapter form a single path of research, conducted within communities using ASL and English, and on a single type of implicature (scalar implicatures). It is mostly because this is a new and growing field of research within spoken languages, especially at the experimental level, that there remain a myriad of paths for sign language research to follow up on this work both in scope (across languages and types of implicatures) and in depth (with larger populations and more complex stimuli). The value of a strong theory of semantics and pragmatics for a more
general theory of natural language should be clear: understanding how language works means understanding how it allows us to communicate infinitely many thoughts with finite means. Sign languages have been shown to have the same combinatorial finite means available to them as spoken languages, and so the question is as relevant for sign as for speech: how does the context contribute to the meaning of these parts? Moreover, sign languages may have additional means that have yet to be studied to a significant degree in spoken languages, such as the gradient/​analog means of communication available in some parts of certain signs (e.g., the movement and locations in classifier predicates) and in the gestures that accompany speech; understanding how these work in a three-​ dimensional spatial system opens vast new areas of research that show great promise for providing a more complete picture of language as a communication system. Adding to this typological differences (e.g., general use coordination in ASL) and language acquisition patterns makes studying implicatures in sign languages an important contribution to natural language linguistics and psycholinguistics. Finally, and most importantly, the semantic/​pragmatic interface has received significant attention by formal sign linguists only very recently, and it has the potential to provide important information to theorists as well as users and educators about the development of language and meaning in Deaf signers with a variety of language backgrounds. This level of linguistic analysis should not be ignored, and the tools of formal pragmatics, founded by Grice and developed over the decades since by many others, can provide a valuable source for new information about sign language and language development.

References

Baker, Anne & Beppie van den Bogaerde. 2012. Communicative interaction. In Roland Pfau, Markus Steinbach, & Bencie Woll (eds.), Sign language. An international handbook, 489–512. Berlin: De Gruyter Mouton.
Barner, David, Neon Brooks, & Alan Bale. 2011. Accessing the unsaid: The role of scalar alternatives in children's pragmatic inference. Cognition 118(1). 84–93.
Bott, Lewis & Ira A. Noveck. 2004. Some utterances are underinformative: The onset and time course of scalar inferences. Journal of Memory and Language 51(3). 437–457.
Boudreault, Patrick & Rachel I. Mayberry. 2006. Grammatical processing in American Sign Language: Age of first-language acquisition effects in relation to syntactic structure. Language and Cognitive Processes 21. 608–635.
Bowler, Margit. 2014. Conjunction and disjunction in a language without 'and'. In Todd Snider, Sarah D'Antonio, & Mia Weigand (eds.), Proceedings of Semantics and Linguistic Theory (Vol. 24), 137–155. LSA and CLC.
Chevallier, Coralie, Deirdre Wilson, Francesca Happé, & Ira Noveck. 2010. Scalar inferences in autism spectrum disorders. Journal of Autism and Developmental Disorders 40(9). 1104–1117.
Chierchia, Gennaro. 2006. Broaden your views: Implicatures of domain widening and the "logicality" of language. Linguistic Inquiry 37(4). 535–590.
Chierchia, Gennaro, Danny Fox, & Benjamin Spector. 2012. Scalar implicature as a grammatical phenomenon. In Claudia Maienborn, Klaus von Heusinger, & Paul Portner (eds.), Semantics: An international handbook of natural language meaning, 2297–2332. Berlin: Mouton de Gruyter.
Chierchia, Gennaro, Stephen Crain, Maria Teresa Guasti, Andrea Gualmini, & Luisa Meroni. 2001. The acquisition of disjunction: Evidence for a grammatical view of scalar implicatures. In Anna H.J. Do, Laura Domínguez, & Aimee Johansen (eds.), Proceedings of the 25th Annual Boston University Conference on Language Development [BUCLD 25], 157–168. Somerville, MA: Cascadilla Press.
Davidson, Kathryn. 2013. 'And' or 'or': General use coordination in ASL. Semantics and Pragmatics 6(4). 1–44.
Davidson, Kathryn. 2014. Scalar implicatures in a signed language. Sign Language & Linguistics 17(1). 1–19.
Davidson, Kathryn. 2015. Quotation, demonstration, and iconicity. Linguistics and Philosophy 38(6). 477–520.
Davidson, Kathryn & Rachel I. Mayberry. 2015. Do adults show an effect of delayed first language acquisition when calculating scalar implicatures? Language Acquisition 22(4). 329–354.
De Neys, Wim & Walter Schaeken. 2007. When people are more logical under cognitive load: Dual task impact on scalar implicature. Experimental Psychology 54(2). 128–133.
Emmorey, Karen & Melissa Herzig. 2003. Categorical versus gradient properties of classifier constructions in ASL. In Karen Emmorey (ed.), Perspectives on classifier constructions in signed languages, 222–246. Mahwah, NJ: Lawrence Erlbaum.
Fox, Danny. 2007. Free choice disjunction and the theory of scalar implicatures. In Uli Sauerland & Penka Stateva (eds.), Presupposition and implicature in compositional semantics, 71–120. New York: Palgrave Macmillan.
Gil, David. 1991. Aristotle goes to Arizona, and finds a language without and. In Dietmar Zaefferer (ed.), Semantics universals and universal semantics, 96–130. Berlin: Walter de Gruyter.
Grice, H. Paul. 1970. Logic and conversation. In Paul H. Grice. 1989. Studies in the way of words, 41–58. Cambridge, MA: Harvard University Press.
Grodner, Daniel J., Natalie M. Klein, Kathleen M. Carbary, & Michael K. Tanenhaus. 2010. "Some", and possibly all, scalar inferences are not delayed: Evidence for immediate pragmatic enrichment. Cognition 116(1). 42–55.
Gualmini, Andrea, Stephen Crain, Luisa Meroni, Gennaro Chierchia, & Maria Teresa Guasti. 2001. At the semantics/pragmatics interface in child language. Proceedings of Semantics and Linguistic Theory XI. Ithaca, NY: CLC Publications.
Huang, Yi Ting & Jesse Snedeker. 2009. Semantic meaning and pragmatic interpretation in 5-year-olds: Evidence from real-time spoken language comprehension. Developmental Psychology 45(6). 1723.
Katsos, Napoleon & Dorothy V.M. Bishop. 2011. Pragmatic tolerance: Implications for the acquisition of informativeness and implicature. Cognition 120(1). 67–81.
Katsos, Napoleon & Nafsika Smith. 2010. Pragmatic tolerance and speaker-comprehender asymmetries. In Katie Franich, Kate M. Iserman, & Lauren L. Keil (eds.), Proceedings of the 34th Annual Boston University Conference on Language Development [BUCLD 34], 221–232. Somerville, MA: Cascadilla Press.
Kennedy, Lynda, Jacopo Romoli, Cory Bill, Raffaella Folli, & Stephen Crain. 2015. Presuppositions vs. scalar implicatures: The view from Broca's aphasia. In Thuy Bui & Deniz Özyıldız (eds.), Proceedings of NELS 45 (vol. 2), 97–110. Amherst, MA: GSLA Publications.
Liddell, Scott. 2003. Grammar, gesture, and meaning in American Sign Language. Cambridge: Cambridge University Press.
Mayberry, Rachel I., Elizabeth Lock, & Hena Kazmi. 2002. Linguistic ability and early language exposure. Nature 417. 38.
Montague, Richard. 1973. The proper treatment of quantification in ordinary English, 221–242. Dordrecht: Springer.
Murray, Sarah E. 2017. Cheyenne connectives. Papers of the 45th Algonquian Conference (PAC45), SUNY Press.
Noveck, Ira A. 2001. When children are more logical than adults: Experimental investigations of scalar implicature. Cognition 78(2). 165–188.
Ohori, Toshio. 2004. Coordination in mentalese. Typological Studies in Language 58. 41–66.
Papafragou, Anna & Julien Musolino. 2003. Scalar implicatures: Experiments at the semantics–pragmatics interface. Cognition 86(3). 253–282.
Papafragou, Anna & Niki Tantalou. 2004. Children's computation of implicatures. Language Acquisition 12. 71–82.
Pijnacker, Judith, Peter Hagoort, Jan Buitelaar, Jan-Pieter Teunisse, & Bart Geurts. 2009. Pragmatic inferences in high-functioning adults with autism and Asperger syndrome. Journal of Autism and Developmental Disorders 39(4). 607–618.
Slabakova, Roumyana. 2010. Scalar implicatures in second language acquisition. Lingua 120(10). 2444–2462.
Stiller, Alex, Noah D. Goodman, & Michael C. Frank. 2011. Ad-hoc scalar implicature in adults and children. Proceedings of the 33rd Annual Meeting of the Cognitive Science Society, 2134–2139.
Tang, Gladys & Prudence Lau. 2012. Coordination and subordination. In Roland Pfau, Markus Steinbach, & Bencie Woll (eds.), Sign language. An international handbook (HSK – Handbooks of Linguistics and Communication Science), 340–365. Berlin: De Gruyter Mouton.
Zucchi, Sandro, Carlo Cecchetto, & Carlo Geraci. 2011. Event descriptions and classifier predicates in sign languages. Presentation at Formal and Experimental Advances in Sign Language Theory (FEAST 1), Venice, June 2011.
Zwitserlood, Inge. 2012. Classifiers. In Roland Pfau, Markus Steinbach, & Bencie Woll (eds.), Sign language. An international handbook (HSK – Handbooks of Linguistics and Communication Science), 158–186. Berlin: De Gruyter Mouton.

21 DISCOURSE ANAPHORA
Theoretical perspectives
Jeremy Kuhn

21.1  Setting the stage

The study of pronouns and anaphora has been integral to the study of formal semantics, giving a variety of insights into the logic underlying natural language. In the values that they can take, pronouns reveal the primitive semantic objects that natural language can make reference to. In the long-distance logical relationship that holds between a pronoun and its antecedent, they give insight into the architecture of the compositional system. The sign language modality provides unique advantages to the study of pronouns and discourse anaphora. Most notably, through the use of space, many sign languages allow the connection between a pronoun and its antecedent to be made phonologically overt: noun phrases (e.g., John, someone, etc.) may be placed at locations in space ('loci'); pronouns then can refer back to an antecedent by literally pointing at the locus where the antecedent was indexed. As a result, sentences that would be ambiguous in spoken language can be disambiguated in sign language. The American Sign Language (ASL) sentence in (1) provides a simple example.

(1)

JOHNa TELL BILLb IX-​a WILL WIN.

a. =  'John told Bill that John will win.'
b. ≠ 'John told Bill that Bill will win.'

(ASL)

In (1), the pronoun IX-a points to the locus that was established by JOHN; thus, unlike the parallel English example (John told Bill that he would win), the pronoun unambiguously refers to John. Replacing IX-a with IX-b results in the opposite interpretation. Rich semantic theories have been built to account for discourse anaphora in spoken language, encompassing quantificational binding within a single sentence and 'dynamic binding' across sentences. In this chapter, I will discuss the sign language contributions to these theories. As we will see, data from sign language will bear on a number of classic and recent debates, including variable-ful vs. variable-free meanings for pronouns and E-type theories vs. dynamic theories for cross-sentential binding. The sign language data will also motivate new questions about the semantic system, in particular with respect to the status of iconic forms within the formal grammar.
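
As a toy illustration of the resolution mechanism just described (a deliberately simplified model, not a claim about the actual grammar), one can think of the discourse as maintaining a mapping from loci to referents, which pointing signs consult:

    # Toy model: loci as keys in a discourse record; IX-a retrieves whatever was indexed at locus a.
    discourse = {}

    def index_np(referent: str, locus: str) -> None:
        """Signing a noun phrase at a locus stores the referent there."""
        discourse[locus] = referent

    def ix(locus: str) -> str:
        """The pointing sign IX retrieves the referent stored at its locus."""
        return discourse[locus]

    # Example (1): JOHN indexed at locus a, BILL at locus b.
    index_np("John", "a")
    index_np("Bill", "b")
    print(ix("a"))  # -> John: IX-a can only pick out John, unlike English 'he'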

The chapter is laid out as follows:  Section 21.2 establishes that pronouns in sign language and spoken language are fundamentally part of the same abstract pronominal system, an essential step if we wish to use data from one modality to bear on the other. Of particular semantic note, we review a wide literature showing that bound readings of pronouns have been established across many sign languages. More generally, we are left with quite a robust generalization that, modulo the use of space, patterns of pronouns in sign language are exactly like those we are familiar with from spoken language. Grounded on the finding that sign language pronouns should be analyzed within the same system as spoken language pronouns, Section 21.3 asks how the use of space should be incorporated into these formal models. We review both variable-​based and feature-​based approaches to the use of space, concluding that loci must be at least partially featural in nature. We then turn to the iconic use of space. While iconicity is present to a limited extent in spoken languages, the visual modality provides a much richer domain in which to test how iconic information is incorporated into a logical grammar. Perhaps the largest theoretical shift in semantic theory has been to the shift towards theories of dynamic semantics (including Discourse Representation Theory), in which sentence meanings are conceptualized not as static forms with truth conditions, but as dynamic operations that change the discourse context itself (Kamp 1981; Heim 1982; Groenendijk & Stokhof 1991; Dekker 1993; Muskens 1996). In Section 21.4, we turn to sign language contributions to debates about dynamic semantics. There are some topics that are not discussed in detail below. First, the iconic use of space for sign language pronouns shows deep parallels with the use of space in gestural pointing. This has yielded a productive branch of research in which sign language pronouns are taken to incorporate a gestural component. See Schembri et al. (2018) for a recent overview; note that these theories are not incompatible with the grammatical analyses described below (cf. Schlenker et  al. 2013). Second, in dynamic theories, an important pragmatic question regards the salience of the available discourse referents, and the degree to which they are accessible using different kinds of anaphoric expressions. The status of this accessibility hierarchy has been investigated for sign languages; see, for example, Barberà & Quer (2018). For empirical work on anaphora resolution in the absence of overt localization, both in comprehension and production, see Nuhbalaoglu (2018); for more general, experimental perspectives on the use of space, see Perniss, Chapter 17.

21.2  The same system

A precondition for using sign language data to bear on theories of pronouns for spoken language – or vice versa – is establishing that pronouns in sign language and pronouns in spoken language are indeed part of the same abstract pronominal system. In this section, we show that this is the case, summarizing descriptive work on the syntax and semantics of pronouns. It is important to note that, given the similarity of the indexical sign IX to pointing gestures that can co-occur with spoken language, this answer is by no means obvious a priori. Yet, this is indeed what we find, in quite a compelling form: modulo the use of space, pronouns in sign language show exactly the same complex patterns as we see in spoken language.

21.2.1  Syntax

Syntactically, pronouns in spoken language are characterized by a range of constraints on distribution and co-reference. These include Binding Theory conditions, crossover effects, and resumptive uses for island extraction. Each of these patterns has been shown to be attested in some form in sign languages. Conditions A and B are generalizations about the distribution of pronouns (he and him in English) and anaphors (himself in English). Broadly speaking, pronouns cannot be bound by an NP in the same local domain (Condition B); anaphors must be (Condition A) – see Chomsky (1981). Sandler & Lillo-Martin (2006) and Koulidobrova (2009) show that related generalizations hold for the pronominals IX and SELF in ASL, as shown in (2) and (3), respectively. The constraints on the reflexive SELF in subject position are weaker than in English, but Koulidobrova (2009) argues that cases of 'non-local binding' are in fact due to local binding by a null pronoun, evidenced in part by a marked, 'intensive' interpretation.

(2) Condition B
    a. * JOHNa LIKE IX-a.
    b.   JOHNa LIKE SELF-a.
         'John likes himself.'                                   (ASL)

(3) Condition A
    a. MARYa THINK JOHNb KNOW PEDROc LIKE SELF-{*a,*b, c}.
       'Mary thinks John knows Pedro likes himself.'
                                     (ASL, Koulidobrova 2009: 2)

In general, a binder must appear at a structurally higher position than the pronoun it binds. ‘Crossover’ (both strong and weak) describes the fact that this cannot be resolved by movement of the binder to a higher node, as in wh-​question formation (Postal 1971). For example, note that in the spell-​out of the English sentence in (4), the NP which boy linearly precedes and is structurally higher than the pronoun he, yet still cannot bind it. (4)

Which boy did he think _​_​would win? Unavailable reading: ‘Which boy x is such that x thought that x would win?’

Similar results have been shown to hold for sign language. Lillo-​Martin (1991), Sandler & Lillo-​Martin (2006), and Schlenker & Mathur (2013) report crossover effects for ASL; Santoro & Geraci (2013) report similar facts for Italian Sign Language (LIS). An example with wh-​movement is given in (5). (Here, the sentence is ungrammatical because the spatial co-​indexation of WHO and IX focuses interpretation on the unavailable reading. Note that WHO can bind pronouns in non-​crossover cases.) (5)  *

WHO-CL-a IX-a THINK MARY LOVE __ NO-MATTER WHAT?
'Who does he think Mary will love __ no matter what?'
Intended: 'Which person x is such that x thinks that Mary loves x no matter what?'
                                     (ASL, Schlenker & Mathur 2013: 17)

In spoken languages like English, there are syntactic constraints against extracting a noun phrase from certain structural positions. However, in many languages, adding a
pronoun at the extraction site often has the effect of rescuing the grammaticality of the sentence. In such cases, the pronoun is called a resumptive pronoun (Ross 1967). What makes this phenomenon particularly interesting is the fact that the semantic meaning of the resumptive pronoun and the gap are identical (roughly speaking, a bound variable). The sentences in (6) provide an example from Hebrew, where a preposition (al 'about') cannot be stranded without a resumptive pronoun.

(6) a. * ha-iSa      Se   dibarnu     al          __    higia.
         the-woman   Op   we-talked   about             arrived
    b.   ha-iSa      Se   dibarnu     ale-ha            higia.
         the-woman   Op   we-talked   about-her         arrived
         'The woman we talked about arrived.'      (Hebrew, Sharvit 1999: 590)

In sign languages, too, there are structural constraints on extraction. Notably for us, Lillo-​Martin (1986) shows that the pronoun IX can be used resumptively in ASL:  the pronoun in (7a) rescues the grammaticality of (7b). Schlenker & Mathur (2013) present tentative evidence that pronouns may also behave resumptively to save cases of crossover generated by wh-​movement.      

(7) a.

top

[THAT COOKIE]a, IX-​1 HOPE SISTERb SUCCEED bPERSUADEc cMOTHER EAT IX-​a.      

top

b. * [THAT COOKIE]a, IX-​1 HOPE SISTERb SUCCEED bPERSUADEc cMOTHER EAT _​_​. ‘That cookiei, I hope my sister manages to persuade my mother to eat iti.’                         (ASL, Lillo-​Martin 1986: 423) Koulidobrova (2012) provides evidence that this might not be the whole story for ASL: in particular, for some ASL signers who report the contrast in (7), the sentence in (7b) also becomes grammatical if the noun phrase ‘THAT COOKIE’ is signed at a neutral location in space. What is relevant now for our generalizations about pronouns is the fact that, in those cases where extraction is prohibited, a resumptive pronoun can often rescue grammaticality. In sum, sign language pronouns show binding conditions, crossover effects, and resumptive effects.

21.2.2  Semantics

Semantically, perhaps the most notable property of pronouns is that they can be bound: they need not always receive a fixed value, but can vary in the scope of another operator. In the English sentence in (8), the pronoun his does not pick out a single individual (either atomic or plural); instead, it varies in value with respect to individuals quantified over by the quantifier phrase every boy. This property of co-variation with a higher operator is the hallmark of a bound reading.

(8)

Every boy saw his mother.

In sign language, can pronouns be bound? Here, I report findings that show the answer to be 'yes': bound readings are attested robustly across the literature and across many sign
languages, including ASL, LIS, French Sign Language (LSF), German Sign Language (DGS), and Russian Sign Language (RSL), to name a few. These results conclusively show that the semantic analysis of pronouns in sign language must be fundamentally the same as pronouns in spoken language. This is in contrast to purely referential analyses that have been proposed for pointing gestures that accompany spoken language (Giorgolo 2010). The empirical situation in sign language is somewhat more complicated; in particular, sign languages sometimes do not allow bound readings in environments where spoken languages do (Graf & Abner 2012; Koulidobrova & Lillo-​Martin 2016). Here, I leave the explanation for these differences largely open. Bound readings can be seen in a wide variety of structures. These include: variation under individual quantifiers like every and no, variation under temporal quantifiers like whenever, variation of focus alternatives under only, and sloppy readings under ellipsis. Kuhn (2016) confirms that pronouns can be bound under ALL in ASL, as in (9). (9)  [ALL BOY]a WANT [ALL GIRL]b THINK IX-​a LIKE IX-​b. ‘All the boys want all the girls to think they like them.’

(ASL, Kuhn 2016: 454)

Kuhn (2016) verifies with interpretation questions that the pronoun is truly receiving a bound reading, evidenced by co-​variation. In particular, (9) has a reading in which each boy wants each girl to think that he likes her (as distinct from a reading where the sum of the boys likes the sum of the girls). This replicates data from Graf & Abner (2012) showing that pronouns can be bound under ALL and EACH in ASL. ‘Donkey sentences’, as discussed in Schlenker (2011), provide an example where pronouns co-​vary in the scope of a temporal quantifier. In the LSF sentence in (10), the value of the pronouns IX-​a and IX-​b depends on which instance of collaboration is being considered (by the temporal quantifier EACH-​TIME).1 (10) 

EACH-​TIME LINGUISTa PSYCHOLOGISTb THE-​THREE-​a,b,1 TOGETHER WORK, IX-​a HAPPY BUT IX-​b HAPPY NOT.

‘Whenever I work with a linguist and a psychologist, the linguist is happy but the psychologist is not happy.’ (LSF, Schlenker 2011: 352) Schlenker (2011) reports these results for ASL and LSF; Kuhn (2016) replicates these patterns for ASL. Steinbach & Onea (2016) report analogous results for DGS, and Quer (2012) for Catalan Sign Language (LSC). In verb phrase ellipsis, the site of ellipsis must retrieve a predicate of type ⟨e, t⟩ from an overt VP in the context. When a pronoun appears in this overt VP, the meaning of the ellipsis site depends on whether the overt pronoun was bound or free, generating an ambiguity:  ‘strict’ readings arise from the ellipsis of a free pronoun; ‘sloppy’ readings arise from the ellipsis of a bound pronoun. Example (11) provides an example with two different LFs that could be retrieved. (11) Teresa saw her mother. Becky did _​_​, too. a.  Strict reading: ‘Becky saw Teresa’s mother.’ VP meaning: λx[x saw yTeresa’s mother]

b. Sloppy reading: 'Becky saw Becky's mother.'
   VP meaning: λx[x saw x's mother]

Note that on the sloppy reading, we essentially have co-​variation over a domain of two: Teresa and Becky. The presence of sloppy readings can therefore be used as another diagnostic for bound pronouns. Sloppy readings of pronouns have been widely reported in the sign language literature. Lillo-​Martin & Klima (1990), among others, report strict/​sloppy ambiguity for ASL. Analogous findings have been reported for many other sign languages, including LSF (Schlenker 2011), LIS (Cecchetto et  al. 2015), and LSC (Quer & Rosselló 2013). Examples are given for ASL and LIS in (12) and (13), respectively. (12) 

MARYa, ALICEb IX-​a THINK IX-​a HAVE MUMPS. IX-​b SAME.

a.  ‘Mary thinks she has mumps. Alice ⟨thinks Mary has mumps⟩, too.’ b. ‘Mary thinks she has mumps. Alice ⟨thinks Alice has mumps⟩, too.’                     (ASL, Lillo-​Martin & Klima 1990: 200) (13)

GIANNIa SECRETARY POSS-​a VALUE. PIERO SAME.

a. 'Gianni values his secretary. Piero ⟨values Gianni's secretary⟩, too.'
b. 'Gianni values his secretary. Piero ⟨values Piero's secretary⟩, too.'
                                        (LIS, Cecchetto et al. 2015: 229)

Finally, under focus-​sensitive operators like only, pronouns that are co-​referent with an NP in focus may be bound or free, creating an ambiguity analogous to that of ellipsis constructions. For example, sentence (14) entails that Alice has a property that holds of no other individuals in context. On the bound reading, the pronoun her co-​varies with respect to these focus alternatives. (14) Only AliceF saw her mother. a. Free reading: ‘No other people saw Alice’s mother.’ b. Bound reading: ‘No other people saw their own mother.’ Kuhn (2016) reports that analogous ambiguities exist for several signers of ASL (cf. (15)). Schlenker (2014) reports similar results for both ASL and LSF. (15)

IX-​a JOHNa ONLY-​ONE SEE POSS-​a MOTHER.

a. 'John saw his mother and no other people saw John's mother.'
b. 'John saw his mother and no other people saw their own mother.'
                                               (ASL, Kuhn 2016: 455)

Thus, as evidenced by examples with individual quantifiers, temporal quantifiers, ellipsis constructions, and focus alternatives, pronouns in sign language can be bound. If I have been somewhat pedantic in enumerating examples of bound readings in sign language, it is because there are a number of examples where bound readings are dispreferred or impossible in sign language where they are perfectly available in spoken language. Two such examples are mentioned here. First, Graf & Abner (2012) report that some ASL signers find it difficult for an overt pronoun to be bound under the quantifier NONE, reporting the data in (16).

(16) * [NO POLITICS PERSON]a TELL-STORY IX-a WANT WIN.
       Intended: 'No politician said that he wanted to win.'
                                      (ASL, Graf & Abner 2012: 196)

Kuhn (2016) reports a split in judgments on similar sentences, with some signers finding analogous constructions acceptable under the bound reading. Signers who find (16) ungrammatical can communicate a similar meaning using other syntactic constructions, such as null pronouns or role shift. See, for example, Quer (2005) for binding into role shift under NO-ONE. Second, bound readings have been reported not to exist on pronouns that have not had an antecedent introduced at a specific locus. Koulidobrova & Lillo-Martin (2016) report the following paradigm.2

(17) a. BOY ALL THINK {IX-a,c/IX-neutral} SMART.
        'All the boysi think theyj/*i are smart.'
     b. PETER THINK {IX-a/IX-neutral} SMART, JOHNb SAME.
        'Peteri thinks hej/*i is smart; Johnk does, too.'
        = Peter and John think someone else is smart.
                 (ASL, Koulidobrova & Lillo-Martin 2016: 226–227)

It is still an open puzzle what exactly is going on in these cases, but the litany of examples above provides strong evidence that exceptions should be captured through constraints (perhaps presuppositions) on a system otherwise identical to spoken language. Finally, while I have tried to make the case that bound readings of pronouns exist across many sign languages, it is fully possible that exceptional languages exist. For instance, for Kata Kolok, a sign language used in a small village in the north of Bali, Indonesia, Perniss & Zeshan (2008) report that pronouns always point to the real-​world locations of their referents or to some object associated with their referent. No data is given about how signers of Kata Kolok express meanings generally communicated through bound readings, but it is nevertheless conceivable that Kata Kolok has a fundamentally different pronominal system from the spoken languages or sign languages reviewed above.

21.2.3  Summary: pronouns in sign language and spoken language

In summary, systems of sign language pronouns, cross-linguistically, fit into the same formal patterns that are well known and established for spoken language pronouns. Syntactically, they reflect Binding Theory conditions, show crossover effects, and can be used resumptively to rescue island violations. Semantically, they can be bound or free, giving rise to ambiguities like strict and sloppy readings under ellipsis. We conclude that pronouns in sign language and pronouns in spoken language are reflections of the same abstract pronominal system.

21.3  How is space encoded?

At this point, we have established that pronouns in sign languages are fundamentally part of the same abstract system as pronouns in spoken language, allowing, in the base case, the same expressive possibilities (e.g., bound readings) and subject to the same kinds of structural constraints (e.g., Binding Theory).

But, as has been widely noted in the literature, sign language pronouns are unique in that they can be disambiguated with the use of space, as we saw in example (1), repeated here as (18). (18) 

JOHNa TELL BILLb IX-​a WILL WIN.

a. =  'John told Bill that John will win.'
b. ≠ 'John told Bill that Bill will win.'

(ASL)

These uses of space display two properties in particular that make them unique. First, there are theoretically infinitely many possible loci; Lillo-Martin & Klima (1990) emphasize this point, noting that even though psychological constraints prevent more than a few loci from being used in a particular discourse, for any two loci, a third locus can be established between them. Second, there is an arbitrary relationship between a given noun phrase and the locus where it is assigned. That is, in one discourse, a particular noun phrase might be assigned one locus; in another discourse, it might be assigned a different locus. Thus, the factors that determine locus placement are not intrinsic to the noun phrase in question; instead, they are determined by a collection of pressures, including the number of referents, the order in which they are mentioned, and phonological constraints. For more discussion of locus placement, see Geraci (2014), who argues that the default placement of loci in LIS reflects position in the syntactic hierarchy, Steinbach & Onea (2016), Nuhbalaoglu (2018), and Wienholz et al. (2018) (also see Perniss, Chapter 17). In spoken language, there seems to be no analogous phonetic marker with these properties that holds the same syntactic status in being able to disambiguate logical forms. For example, no spoken language can arbitrarily place pitch contours on a noun phrase as a unique designator that can be repeated later on a pronoun that refers to it. On the other hand, see Aronoff et al. (2005) for discussion of 'alliterative agreement' in Bainouk and Arapesh, which arguably reflects a theoretically unbounded feature set. Given the results discussed in Section 21.2, we have argued that sign language pronouns and spoken language pronouns should be analyzed within the same basic framework. How, then, do we encode the use of space into this framework? Two basic answers have been proposed for this question. The first principal line of analysis follows Lillo-Martin & Klima (1990), who propose that loci are an overt phonological reflection of syntactic indices, or, in semantic terms, variable names. The second principal line of analysis (Neidle et al. 2000; Kuhn 2016; Steinbach & Onea 2016) posits that loci are a kind of syntactic feature – albeit one with the unusual properties described above. Here, following Kuhn (2016), I will argue that compelling parallels exist between loci in sign language and morphosyntactic features in spoken language, several of which cannot be captured in a purely variable-based analysis. These include the following facts:

1. In appropriate contexts, multiple distinct noun phrases can be indexed at the same locus, just as multiple noun phrases in spoken language can bear the same feature.
2. Loci on pronouns may be uninterpreted in exactly the same contexts where morphosyntactic features are uninterpreted in spoken language – namely, in sites of ellipsis and under focus-sensitive operators.
3. Loci induce changes on verbal morphology in a way parallel to feature agreement or clitic incorporation (ASL: Lillo-Martin & Meier 2011, among others).

4. Loci show patterns of underspecification similar to syncretisms familiar from spoken language (ASL: Kuhn 2016; DGS: Steinbach & Onea 2016).

In this section, I focus primarily on the first two of these properties, which pose challenges for the variable-based analysis.

21.3.1  Variables or features?

Lillo-Martin & Klima (1990) observe that there are a number of striking parallels between loci and formal variables: in both cases, they appear on a pronoun and its antecedent, there are unboundedly many of them, and they disambiguate pronouns under multiple levels of embedding. Inspired by this wealth of similarities, Lillo-Martin & Klima (1990) propose that loci are an overt phonological reflection of variable names. On the other hand, a rich thread of semantic work argues that the logic underlying natural language does not make use of formal variables (e.g., Quine 1960; Szabolcsi 1987; Jacobson 1999). Grounding for this hypothesis arises from the fact that variables are not logically necessary for expressive purposes; for example, Jacobson (1999) presents a grammar of pronoun binding in natural language that makes no use of variables. There is thus a theoretical tension between theories of semantics that say that variables do not exist, and analyses of sign language that say that loci are them. From another point of view, the semantic equivalence of variable-free and variable-ful systems is a sword that cuts both ways: anything that is expressible without variables can also be expressed with variables. The question, then, is a syntactic one: which semantic theory is a better match for the compositional system that we see in natural language? This formulation in fact reflects the discussion of Lillo-Martin & Klima (1990), who draw a distinction between the linguistic object – the locus – and the syntactic object – the index. The question about loci can thus be reformulated: to what extent do these linguistic objects – loci – seem to have the formal properties of variables? Kuhn (2016) approaches this problem by laying out a strong instantiation of a variable-based hypothesis side-by-side with the hypothesis in which loci are analyzed as a morphosyntactic feature, akin to phi-features in English (Neidle et al. 2000). The two hypotheses can be stated as follows:

(19) The (strong) loci-as-variables hypothesis: There is a one-to-one correspondence between ASL loci and formal variables.

(20) The loci-as-features hypothesis: Different loci correspond to different values of a morphosyntactic spatial feature.

Kuhn (2016) isolates the following property that critically distinguishes the two hypotheses: two variables of the same name that are unbound in a particular constituent must receive the same interpretation; in contrast, two pronouns that are unbound in a particular constituent may bear the same feature yet receive different interpretations. This difference is exemplified by the examples in (21). In both sentences, the two pronouns are unbound in the bracketed constituent. In (21a), the two pronouns both bear the feature [+masc], but can receive distinct interpretations, yielding a meaning where the cat and the dog have different owners. On the other hand, in (21b), the two
pronouns are both interpreted as the same variable; they must therefore pick out the same individual. (21) a.  John told Barry that [his[+masc] cat scratched his[+masc] dog]. b. John told Barry that [hisx cat scratched hisx dog]. These facts make predictions about loci in ASL. A  featural analysis predicts that two pronouns that are unbound in the same constituent can share the same locus yet receive different interpretations; a variable-​based analysis predicts that they cannot. Kuhn (2016) argues that it is possible to find cases where two pronouns are indexed at the same locus but nevertheless receive different interpretations, thus falsifying the strong loci-​as-​variables hypothesis. Two kinds of examples form the core of the argument. First, we consider cases where two referential NPs at the same locus serve as potential antecedents for later pronouns. The acceptability of such sentences seems to be dependent on a number of pragmatic factors but improves when context and world knowledge sufficiently disambiguate the sentence (so that space does not have to). The sentence in (22) is judged as acceptable (on a seven-​point scale, reliably at 6/​7); critically, the sentence entails that John tells Mary that he loves her (or, dispreferred by world knowledge, that she loves him). The two pronouns are co-​located but not co-​referential. (22) 

EVERY-​DAY, JOHNa TELL MARYa IX-​a LOVE IX-​a. BILLb NEVER TELL SUZYb IX-​b LOVE IX-​b.

‘Every day, Johni tells Maryj that hei loves herj. Billk never tells Suzyl that hek loves herl.’         (ASL, Kuhn 2016: 463) In a second class of examples, two pronouns appear at the locus of an NP modified by ONLY-​ONE. As discussed above, under focus-​sensitive operators like only, pronouns that are co-​referential with the focused NP may be bound or free. In sentences with two pronouns, then, four readings are logically possible; either pronoun can be bound and either can be free.3 Sentence (23) tests what happens in sign language; here, note that there is no question that there is only a single locus involved, since there is only one NP introducing locus b. Kuhn (2016) reports a context-​matching task that shows that this sentence is ambiguous in ASL, just as in English. To highlight one of the mixed readings, the context for the ‘free-​bound’ reading is provided in (24). (23) 

IX-​a JESSICA TOLD-​ME IX-​b BILLY ONLY-​ONE FINISH-​TELL POSS-​b MOTHER POSS-​b FAVORITE COLOR.

‘Jessica told me that only Billy told his mother his favorite color.’ Can be read as: bound-​bound, bound-​free, free-​bound, or free-​free.                       (ASL, Kuhn 2016: 467) (24)  Free-​bound: [Only Billyx] λy.y told x’s mother y’s favorite color. Context: Billy’s mother can be very embarrassing sometimes. When she has his friends over to play, she asks them all sorts of personal questions, which they are usually reluctant to answer. Yesterday, she asked them what their favorite color is, but only Billy answered. (ASL, Kuhn 2016: 467)

467

468

Jeremy Kuhn

Critically, on the two mixed readings, the two pronouns are co-​located but receive different interpretations. The strong loci-​as-​variables hypothesis thus undergenerates. On the other hand, the latter example in fact shows an interesting parallel with phi-​ features in spoken language. Specifically, phi-​features may be ‘uninterpreted’ when bound by focus-​sensitive operators like only. For example, the bound reading of (25) entails that no other individuals in some comparison set did their homework. What is interesting is that this comparison set is not restricted to individuals that match the phi-​features of the pronoun; for example, it can include John, who is not female. (25)  Only Mary did her[+fem] homework. Entails: John didn’t do his homework. This pattern extends to ASL loci: when a pronoun is bound under ONLY-​ONE (as in several readings of (23)), its interpretation in the comparison set may range over individuals who are indexed at other loci, such as Jessica in (23), indexed at locus a. Thus, the strong loci-​as-​variables hypothesis has been falsified. In contrast, loci share important formal properties with morphosyntactic features. At this point, there are essentially two directions that a theory can go. The first route is the more radical:  since ASL loci do not necessitate a variable-​based analysis, Kuhn (2016) provides a purely feature-​based analysis in a variable-​free, Directly Compositional framework. Alternatively, weaker forms of the variable-​ based hypothesis are available. Schlenker (2016), recognizing the problems presented here, presents one such weakening: an analysis in terms of ‘featural variables’, where variables, like features, may also be subject to erasure. We leave the choice between these directions open.
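
To make the formal point concrete, here is one way to spell out the two interpretations in (21) in standard assignment-based notation; the rendering is ours, for illustration only.

    % (21b): same variable name, free in the bracketed constituent -> values must coincide
    \[
    [\![\, \text{his}_x \text{ cat scratched his}_x \text{ dog} \,]\!]^{g}
        \;=\; \mathrm{scratched}\big(\mathrm{cat\text{-}of}(g(x)),\ \mathrm{dog\text{-}of}(g(x))\big)
    \]

    % (21a): same feature, distinct indices -> co-reference is not forced;
    % [+masc] only constrains each value to be male
    \[
    [\![\, \text{his}_{x,[+\mathrm{masc}]} \text{ cat scratched his}_{y,[+\mathrm{masc}]} \text{ dog} \,]\!]^{g}
        \;=\; \mathrm{scratched}\big(\mathrm{cat\text{-}of}(g(x)),\ \mathrm{dog\text{-}of}(g(y))\big)
    \]

On the featural view, co-located pronouns in ASL pattern with the second case, which is what the data in (22) and (23) suggest.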

21.3.2  Spatial syncretisms

Within feature-based analyses of loci, two explicit accounts have recently been proposed: Steinbach & Onea (2016) and Kuhn (2016). These two analyses target slightly different empirical domains, so differ accordingly in framework: the former (in Discourse Representation Theory) is designed to account for dynamic binding across sentences, so is not compositional; the latter (in Combinatory Categorial Grammar) is designed to account for quantificational binding within sentences, so is not dynamic. Nevertheless, the two accounts share a number of formal properties, which make similar predictions regarding cases of underspecification of locus features. Kuhn (2016) observes that these patterns parallel behavior of features that has been documented for spoken language. Steinbach & Onea (2016) and Kuhn (2016) observe that nouns, pronouns, and verbs may be underspecified for locus. For example, while the form of an agreeing verb determines the loci of its arguments, non-agreeing verbs are underspecified in the sense that they are compatible with arguments at any locus (see Quer, Chapter 5, on agreement). Similarly, a null pronoun inherently cannot display spatial features, so is underspecified in that it can be associated with an antecedent at any locus. In DGS, Steinbach & Onea claim that overt pronouns, too, can be underspecified for locus; in (26), the neutral pronoun can pick out either of two discourse referents.

(26)

MARIA IX-​a NEW TEACHER IX-​b LIKE. IX-​neutral SMART.

'Maria likes the new teacher. {Maria / The new teacher} is smart.'
                                  (DGS, Steinbach & Onea 2016: 435–436)

Indeed, in DGS, Steinbach & Onea (2016) report that multiple levels of underspecification are possible: a pronoun that is directed generally to the right side of the signing space may pick out a discourse referent on the near right or the far right. Kuhn (2016) observes that these patterns of underspecification are analogous to cases of syncretisms in spoken language, where two morphological forms of a word are phonologically identical. In English, for example, the nominative and accusative forms of the second person pronoun display a syncretism (you/​you, compared to he/​him). In German, the noun Frauen (‘women’) is identical in the accusative and dative case, meaning that it can serve as an argument both for verbs that select accusative case (e.g., finden ‘to find’) and for those that select dative case (e.g., helfen ‘to help’). From this point of view, neutral pronouns in DGS (and perhaps ASL) display a syncretism, able to retrieve an antecedent at any locus. Johnson & Bayer (1995) observe that theories of syncretisms make specific predictions regarding the coordination of unlike categories:  namely, when constituents of two different categories are coordinated, the resulting constituent can only combine in the grammar with an argument that shows a syncretism with respect to the same categories. This prediction can be made concrete with a German example involving third person singular forms of the two verbs introduced above (i.e., findet/​hilft). When a verb that subcategorizes for an accusative object is coordinated with a verb that subcategorizes for a dative object, the resulting complex verb can only take arguments that are syncretic between accusative and dative case. The prediction is borne out: Frauen is grammatical in such cases (27c); Männer/​Männern, ‘man-​ACC/​DAT’ is not (27a,b). (27)  a. * Er findet und hilft Männer. b. * Er findet und hilft Männern. c. Er findet und hilft Frauen. ‘He finds and helps women.’       

(German, Johnson & Bayer 1995: 73)

In sign language, the frameworks used by Kuhn (2016) and Steinbach & Onea (2016) make an analogous prediction: when you coordinate a DP at locus a with a DP at locus b, the resulting complex DP can only bind pronouns that are syncretic between locus a and locus b. As it turns out, data of precisely this nature is discussed by Schlenker (2011), in the form of disjunctive antecedents. Specifically, when two DPs at distinct loci are coordinated with OR, the resulting discourse referent can be retrieved by a (syncretic) null pronoun (28b), but not by an overt pronoun that is itself specified for locus (29b).

(28) a. BLACK-neutral OR ASIA-neutral WILL WIN NEXT PRESIDENT ELECTION. Ø WILL WIN AHEAD.
     b. BLACK-a OR ASIA-b WILL WIN NEXT PRESIDENT ELECTION. Ø WILL WIN AHEAD.
        'An African-American or an Asian-American will win the next presidential election. He will win by a large margin.'   (ASL, Schlenker 2011: 386)

(29) a. BLACK-neutral OR ASIA-neutral WILL WIN NEXT PRESIDENT ELECTION. IX-neutral WILL WIN AHEAD.
     b. ??BLACK-a OR ASIA-b WILL WIN NEXT PRESIDENT ELECTION. IX-{a/b} WILL WIN AHEAD.

‘An African-​American or an Asian-​American will win the next presidential election. He will win by a large margin.’   (ASL, Schlenker 2011: 386) More empirical work is needed to evaluate the extent of this parallel. We also note that alternative explanations may be possible for the sign language data (see, e.g., Schlenker’s (2011) variable-​based analysis of the paradigm above). Nevertheless, given the freedom with which loci can be assigned, the directions discussed here provide a potentially very rich domain with which to investigate theories of underspecification and syncretisms that have been built based on spoken language.

21.3.3  Pictorial loci

Another theoretical tension introduced by sign language regards the interaction of the combinatorial grammar with iconic, pictorial representations. As emphasized in Section 21.2, the patterns that we see in sign language (in pronouns as elsewhere in the grammar) fit closely with discrete and categorial patterns familiar from spoken language. But sign language is also well known for its ability to express meaning in a demonstrative, picture-like way. For example, a zig-zagging motion of a hand can describe the zig-zagging motion of a vehicle, and a small circle with the fingers can describe a disk of the same size (see work on 'classifier' constructions, as in Emmorey (2003); also see Tang et al., Chapter 7). Work by Cuxac (2001) and Liddell (2003) emphasizes that these patterns have a systematicity to them yet cannot be analyzed with the standard tools for language. Schlenker et al. (2013) address this tension in the domain of pronouns. Looking at the geometric properties of singular and plural pronouns in ASL and LSF, they confirm that the form-to-meaning mapping contains an iconic component. However, they show that this can be reconciled without a hitch with the formal grammar: the iconic mapping defines a predicate – a set of objects – that then interacts in the grammar as normal. Zucchi (2011) and Davidson (2015) reach a similar conclusion for the case of classifier constructions (i.e., category-specific pronominal forms in constructions that iconically express orientation and movement), showing that they can be captured by allowing a verb to take a 'demonstration' as an argument – that is, a set of pictorially described events (see Tang et al., Chapter 7, for details). This can be illustrated in somewhat more detail with the specific case of locus height. For ASL and LSF, Schlenker et al. (2013) establish that the height of a locus can be used to indicate the height of the value of the pronoun. For example, high loci are used for tall individuals, low loci for short individuals. Yet, this is not simply a matter of a [±tall] feature on a pronoun: Schlenker et al. (2013) show that the height of the pronoun is also sensitive to whatever the orientation of the referent happens to be. For example, the locus height for the same individual standing up, lying down, or hanging from a branch is different, depending on where the upper half of their body is located. At the same time, however, these pronouns still obey the formal patterns described in Section 21.2; for example, Schlenker et al. (2013) demonstrate that pronouns with iconic height inferences still show sensitivity to binding conditions. The empirical situation thus calls for a way to incorporate iconicity and formal grammar into a single system. Schlenker et al. (2013) thus define a rough iconic mapping inspired by geometric projection (see Greenberg (2013) for a more precise formulation), which returns the set of all individuals whose torso is in the indicated position, relative to some viewpoint. Based
on the projective properties of these iconic meanings, Schlenker et al. (2013) incorporate this iconically defined predicate as a presupposition on the denotation of the pronoun. The pictorial information is thus ‘packaged’ in a way that allows it to get passed along through the system as usual. Of relevance to the discussion in Section 21.3.1, Schlenker (2014) further observes that these height/​orientation inferences in some respects behave analogously to grammatical phi-​features in spoken language. In particular, like gender features, person features, and (as seen above) choice of locus, Schlenker (2014) shows that height/​orientation inferences are left uninterpreted under ellipsis and focus-​sensitive operators. Schlenker ultimately rules that the LSF judgments are not clear enough to definitively dissociate these effects from the behavior of not-​at-​issue (e.g., presupposed) material in general (as opposed to specifically the behavior of features). Nevertheless, a unified picture begins to emerge where loci  –​both in their iconic and their grammatical uses  –​are incorporated as a presupposed or featural component on a pronoun.

21.4  Dynamic semantics

21.4.1  Background on dynamic semantics

Perhaps one of the largest theoretical shifts in semantic theory has been the shift from traditional, static semantics to theories of dynamic semantics. On traditional, static views of meaning, sentences denote sets of worlds or situations: essentially, those in which the sentence is true. Sentences in discourse are interpreted conjunctively and restrict the set of worlds that are under discussion, as in (30).

(30)  a. It is raining. Richard laughed.
      b. raining ∧ laughed(richard)

However, a static conception of meaning faces challenges in light of more complex cross-sentential relations, such as discourse anaphora. The puzzle can be illustrated with the sentences in (31); here, the pronoun in the second sentence is most easily interpreted as referring to whichever man entered. Intuitively, we need to provide a meaning like the one in (32a), where the existential is able to scope over both sentences. The situation gets even more hairy with pronouns that occur several sentences away from their antecedent; somehow, the existential must be given unbounded scope. This is at odds with a standard static semantic theory, which locks in quantifier scope at a sentential level, with a logical form that generates the meaning in (32b). Note that on this meaning, there is no logical connection between the bound variable and the free variable.

(31)  Someone entered. He laughed.

(32)  a. ∃x[entered(x) ∧ laughed(x)]
      b. ∃x[entered(x)] ∧ laughed(x)  =  ∃x[entered(x)] ∧ laughed(y)

Dynamic semantics (Kamp 1981; Heim 1982; Groenendijk & Stokhof 1991; Dekker 1993; Muskens 1996; among others) reconceptualizes the meaning of a sentence as a ‘context-change potential’, that is, a function which changes the context in some way.
The output context of one sentence becomes the input context for the following sentence. This yields a more powerful semantic system, allowing sentences to do more than just restricting what worlds we are talking about; in addition, it becomes possible for a sentence to add new discourse referents into a context. Specifically, a sentence is evaluated with respect to an assignment function –​essentially, a list of all the individuals in the discourse context. Indefinites and proper names (e.g., a man, John) are interpreted dynamically: their semantic contribution is to add a new value to the list. The updated list serves as the input for the next sentence in the discourse. The discourse in (33) illustrates how this allows the set of discourse referents to increase. We will assume a neutral context; this is represented by the starting state of a singleton set containing an empty list. Sentence (33a) contains the indefinite a woman, which assigns a value to one variable in the assignment function; the rest of the content of the sentence restricts what the value of this variable can be. The output of (33a) is the set of all assignment functions in which the first variable is assigned to some woman who entered. The following sentence has no dynamic elements in it; thus, the sentence itself is static, and the only contribution is to again restrict the possible values of the variable already assigned; the output of (33b) is thus a subset of the input of (33b). Finally, (33c) includes a proper name, which again introduces a variable whose value is restricted to the named individual; the other content in the sentence again adds restrictions to the possible values of the two variables. (33)

      {[]}
   a. A woman walked into the office.
      {[x] | woman(x) ∧ enter(x)}
   b. She was worried.
      {[x] | woman(x) ∧ enter(x) ∧ worried(x)}
   c. She was looking for Ed.
      {[x,y] | woman(x) ∧ enter(x) ∧ worried(x) ∧ y = ed ∧ search(y)(x)}
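To make the bookkeeping in (33) concrete, here is a minimal toy implementation of such an update system (a sketch under my own simplifying assumptions – the function names, the tiny model, and the representation of contexts as sets of tuples are illustrative, not the formalism of any of the works cited above). A context is a set of assignments; indefinites and proper names extend each assignment with a new value, and descriptive content filters the assignments that survive.

```python
# Toy dynamic update: a context is a set of assignments (tuples of individuals).
# Hypothetical, simplified model of the steps in (33); not taken from the chapter.

DOMAIN = {"mary", "sue", "ed"}
WOMAN, ENTERED, WORRIED = {"mary", "sue"}, {"mary"}, {"mary"}
SEARCH = {("ed", "mary")}          # (object, subject): Mary was looking for Ed

def introduce(context, domain=DOMAIN):
    """An indefinite or proper name adds a new slot, filled in every possible way."""
    return {a + (d,) for a in context for d in domain}

def restrict(context, condition):
    """Descriptive content keeps only the assignments that satisfy it."""
    return {a for a in context if condition(a)}

ctx = {()}                                            # neutral context: one empty list
# a. A woman walked into the office.
ctx = restrict(introduce(ctx), lambda a: a[0] in WOMAN and a[0] in ENTERED)
# b. She was worried.  (static: only restricts the value already assigned)
ctx = restrict(ctx, lambda a: a[0] in WORRIED)
# c. She was looking for Ed.  (the proper name introduces a second slot)
ctx = restrict(introduce(ctx), lambda a: a[1] == "ed" and (a[1], a[0]) in SEARCH)

print(ctx)    # {('mary', 'ed')}: the assignments that survive the whole discourse
```

A later pronoun can then simply be interpreted as a slot in these surviving assignments, which is the sense in which dynamic binding and sentence-internal binding receive the same treatment.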

On a dynamic view, the meaning of a dynamically-bound pronoun is the same as the meaning of a pronoun that is quantificationally bound within a sentence. For example, in Groenendijk & Stokhof’s (1991) Dynamic Predicate Logic, the meaning of a pronoun (in both cases) is the value of a variable; dynamic binding occurs when the value of this variable has been assigned in the context in which the pronoun is evaluated (see below for variable-free treatments of dynamic semantics). Note that the fact that each sentence is evaluated in the context of the previous sentence means that pronouns in a given sentence can only refer to the individuals that have been introduced by previous sentences.

One particularly influential case for the empirical domain encompassed by dynamic semantics is that of so-called ‘donkey sentences’, as exemplified in (34).

(34)  If a farmer beats a donkey, it kicks him back.

On standard assumptions, the indefinites in (34) are not in a position where they can syntactically bind their pronouns (though, see Barker & Shan (2008) for an alternative); thus, the pronouns in (34) must receive their interpretation through the same general mechanism that gives pronouns their interpretation in cross-sentential cases like (31). But in a conditional sentence like (34), co-variation in the donkey-farmer pairs is visible in
the truth conditions of the sentence; the dynamic interpretation of the conditional must therefore allow quantification over assignment functions. One additional note is relevant to mention:  the insights of dynamic semantics are perfectly compatible with variable-​free theories of semantics, a point made by Szabolcsi (2003). In particular, although variable names provide a convenient way to refer to the slots in the lists that are dynamically passed through the composition of discourse, these are not fundamental to the dynamic architecture. Szabolcsi (2003) provides a semantics that is variable-​free yet represents sentences as context-​change potentials and analyzes cross-​sentential anaphora via binding, as in dynamic semantics. This fact will be relevant to the interpretation of Schlenker’s (2011) loci-​based arguments in favor of dynamic semantics in light of Kuhn’s (2016) arguments against a variable-​based view of loci.
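For concreteness, the co-variation at issue in (34) corresponds to the familiar ‘strong’ reading of donkey sentences. Rendered in the notation of (30)–(32) (my rendering, not a formula quoted from the chapter), it amounts to ∀x∀y[(farmer(x) ∧ donkey(y) ∧ beat(y)(x)) → kick(x)(y)]: every farmer–donkey pair standing in the beating relation also stands in the kicking-back relation – a pairwise dependency that a sentence-bounded existential, as in (32b), cannot express.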

21.4.2  E-type theories of cross-sentential anaphora

What cross-sentential binding and donkey sentences show is that some enrichment to the semantics is needed to allow a pronoun to co-vary with an indefinite that is not in a position to scope over it, but the precise nature of this enrichment has been a matter of debate. Under dynamic theories, as we have seen, words and sentences are able to introduce individual variables into the context that get passed along through the discourse. All pronouns, whether locally or dynamically bound, are individual-type variables. In E-type theories of anaphora (Evans 1980; Elbourne 2005), the semantics is enriched not by assignment functions that pass individual variables through the discourse, but by situations – minimal information states with information about the world. For example, the first sentence in (31) would denote the set of minimal situations in which a single man entered. Under E-type theories, cross-sentential pronouns and donkey pronouns are not variables. Instead, they are analyzed as definite descriptions, so the Logical Form of he in the second sentence of (31) is the definite description the man. Critically, the value of this definite description must come from some formal link to the previous discourse. Elbourne (2005) takes this to be a case of syntactic ellipsis: a pronoun is a definite description with an ellipted NP retrieved from a syntactic antecedent.

The detailed range of phenomena in spoken language has caused the E-type analysis to converge with the dynamic analysis in many respects. For example, as for dynamic semantics, donkey sentences again necessitate quantification – in this case, over situations. In fact, Dekker (2004) argues that when the E-type analysis becomes sufficiently fine-grained to deal with the range of data, it may even become isomorphic to dynamic semantics. The critical examples are cases of donkey sentences that contain two completely symmetric indefinites, as in (35).

(35)  When a bishop meets a bishop, he blesses him.

What is important here is that the minimal situation described by the antecedent does not introduce a unique individual that can be retrieved by the pronoun. The bishop is not well defined, because there are two of them; indeed, even the longer definite description the bishop that meets a bishop is not well defined, as the verb describes a symmetric relation. Elbourne’s (2005) answer to this puzzle is to posit that meet is in fact not a symmetric relation as far as situations are concerned: the situation in which A meets B is distinct from the situation in which B meets A. Dekker (2004) claims that retrieving individuals
from such fine-​grained situations becomes isomorphic to retrieving the values of variables from an assignment function, thus converging with dynamic semantics.

21.4.3  Sign language contributions

Schlenker (2011) argues that sign languages (specifically, ASL and LSF) provide a decisive piece of evidence in favor of dynamic theories, to the extent that the two theories are not notational variants. As discussed above, empirical data has forced the E-type theory to essentially replicate formal aspects of a dynamic theory, to the point where the E-type theory threatens to become a notational variant of a dynamic system (Dekker 2004). Schlenker, however, observes that one critical difference still distinguishes the two theories: namely, the formal link between a pronoun and its antecedent. In dynamic semantics, this link arises semantically, via binding (on a variable-based system, through the co-indexation of a pronoun with its antecedent); on an E-type theory, the link arises syntactically, via NP ellipsis. Schlenker observes that in sign language this link is made overt; as we have seen, a pronoun must point towards the locus of its antecedent. The question then is: when you point to a locus in cases of cross-sentential anaphora or donkey anaphora, are you retrieving a semantic variable, or are you retrieving syntactic material?

As discussed in Section 21.3, one unique feature about loci is the arbitrary connection between an NP and its locus, so different occurrences of the same NP (e.g., BISHOP) can be indexed at two different loci. Schlenker makes use of this arbitrariness to dissociate the syntactic material (the NP) from the semantic denotation (the variable, essentially). Specifically, if there are two identical NPs in a sentence (as in the bishop-sentences above), these NPs can nevertheless be placed at two distinct loci. One such example from Schlenker (2011) is given in (36).

(36)

WHEN SOMEONEa AND SOMEONEb LIVE TOGETHER, …
a.    IX-a LOVE IX-b.
b.    IX-b LOVE IX-a.
c.  # IX-a LOVE IX-a.
d.  # IX-b LOVE IX-b.
‘When someone and someone live together, one loves the other.’
(ASL, Schlenker 2011: 357)

Recall that the link between a pronoun and its antecedent on an E-type theory is a matter of syntactic ellipsis. In an E-type theory for sign language, then, the only contribution of pointing to a locus is to identify the NP material that should be retrieved for the definite description. Counterintuitively, this predicts that pointing to the locus where one individual was indexed can retrieve an individual who was indexed at a different locus as long as the two were described symmetrically. An equally counterintuitive corollary is the prediction that pointing to the first locus twice in these cases should not result in a Condition B violation, since the semantics is able to provide an interpretation where the two pronouns receive different meanings. The example in (36) shows that this prediction is not borne out, since the sentences in (36c) and (36d) are infelicitous, showing the existence of a Condition B violation. The E-type theory is thus falsified.

One of the things that this debate brings out is the idea that semantic objects which intuitively feel quite different – situations vs. assignment functions – can nevertheless have very similar formal properties. As both situation/event semantics and dynamic semantics are enriched to encompass new empirical domains (such as plurals), it is an open question whether these formalisms will ultimately be isomorphic, or whether they can be teased apart.

21.4.4  Dynamic semantics of plurals

In the last 20 years, the framework of dynamic semantics has been enriched to allow the semantic system to represent and manipulate functional relationships between plural discourse referents (van den Berg 1996; Nouwen 2003; Brasoveanu 2013). Motivating examples include sentences like (37), where the final pronoun retrieves a functional antecedent that is constructed by the interaction of the distributive operator and the indefinite in the preceding sentence.

(37)  Three boys each saw a girl. They each waved to her.

Notably, the interpretation of the final pronoun depends on a correspondence that was introduced by the first sentence. The sentences entail that whichever girl was seen by each boy, that is the girl that he waved to. In order to represent this meaning, the semantic system must represent not only lists of individuals but sets of lists of individuals. Adapting the conventions used earlier, evaluating the two sentences in (37) would thus yield the output state shown in (38).

(38)  {  [x1, y1]
         [x2, y2]   |  ∀i, boy(xi) ∧ girl(yi) ∧ saw(yi)(xi) ∧ waved(yi)(xi)  }
         [x3, y3]

A growing body of recent work has shown that this kind of constructed, functional reference appears in a wide variety of phenomena beyond pronouns: Henderson (2014) discusses the role of functions in dependent indefinites (where an indefinite is inflected to indicate that it varies with respect to another argument); Bumford (2015) discusses the role of functions in the ‘internal’ readings of adjectives like same and different. Without getting into the details of these accounts, observe that the only way to state the truth conditions for the sentence in (39) is by making reference to the boy-book correspondences. This means that a paraphrase in terms of functions, as below, is a very natural way to state the truth conditions.

(39)  Every boy read the same books.
      ‘The function from boys to the books they read is a constant function.’

Strikingly, what is conceptually unified within these theories has been shown to be visibly unified in sign language. Specifically, for ASL, Kuhn (2017) shows that dependent indefinites and the adjectives SAME and DIFFERENT all employ spatial co-location to specify dependency relations. For example, in (40), the indefinite ONE or the adjective SAME moves in an arc-movement over the same area of space that was established by the plural BOY. This inflection has a semantic effect: (40a) entails that a plurality of books
is distributed over the boys, one each; (40b) only allows an ‘internal’ reading where the ‘sameness’ is distributed over the boys. Kimmelman (2015) presents analogous data for RSL, looking at distributive marking on numerals, nouns, and verbs.

(40)

a. BOYS IX-arc-a READ ONE-arc-a BOOK.
   ‘The boys read one book each.’
b. ALL-a BOY READ SAME-arc-a BOOK.
   ‘All the boys read the same book as each other.’
   (ASL, Kuhn 2017: 417)

Notably, through the use of spatial agreement, sign language is able to disambiguate readings where spoken language cannot. In particular, dependent indefinites in spoken language (e.g., in Hungarian) are ambiguous when there are multiple potential licensors (cf. (41)); in ASL, they are not (cf. (42)).

(41)  A    fiúk   két-két   könyvet   adtak      a     lányoknak.
      the  boys   two-two   book      give.3PL   the   girls
      ‘The boys gave the girls two books {per boy OR per girl}.’
      (Hungarian, Kuhn 2017: 411)

(42)

ALL-​a BOY GAVE ALL-​b GIRL ONE-​arc-​b BOOK.

‘All the boys gave all the girls one book per girl.’

(ASL, Kuhn 2017: 412)

These data show that the semantic representation of dependent indefinites in ASL must be rich enough to represent the connection between the dependent indefinite and its licensor. Kuhn (2017) argues that the recent dynamic treatments of plurals provide exactly the tools that are needed for this purpose.

21.5  Conclusion

In this chapter, I looked at the case study of pronominal reference in sign language. Grounded in the robust finding that sign language pronouns and spoken language pronouns are part of the same system, we turned to a series of semantic debates where the unique properties of sign language promised to yield new insights. First, we examined the degree to which loci reflect the properties of formal variables. There are a number of compelling parallels – e.g., the unbounded number of them and the arbitrary choice of locus – but we observed other respects in which the constraints of variables are too strict to generate the patterns of ASL. This led us to a feature-based view of loci. Examining cases of underspecification, we saw one area in which this featural analysis may be leveraged to give new insights. Turning to cases of iconicity, we reviewed analyses that successfully incorporated iconic meaning into a combinatorial grammar. When iconic meaning appeared on pronouns, we saw that it exhibited several of the properties of grammatical features, thus dovetailing with the results on non-iconic uses of loci. We then turned to cross-sentential cases of anaphora, where dynamic semantics was pitted against an E-type, situation-semantics view. We showed that the sign language data provides evidence that a theory like dynamic semantics is necessary in order to capture the full range of data. Finally, looking at a wide range of examples involving distributive marking, we argued that sign language provides support for recent enrichments
of dynamic semantics in which the semantic system represents functional relationships between plural discourse referents.

Notes

1 These examples also play an important role in the theory of dynamic semantics; we will return to these arguments in Section 21.4.
2 What is glossed as IX-neutral is described by Neidle et al. (2000: 94) as follows: “the sign is articulated in a kind of neutral position, close to the signer’s body, and this spatial location does not carry referential information”.
3 There is a small quirk to this pattern, commonly known as Dahl’s puzzle: when one pronoun c-commands the other, one of the two mixed readings becomes unavailable (Dahl 1974).

References Aronoff, Mark, Irit Meir, & Wendy Sandler. 2005. The paradox of sign language morphology. Language 81(2). 301–​334. Barberà, Gemma & Josep Quer. 2018. Nominal referential values of semantic classifiers and role shift in signed narratives. In Annika Hübl & Markus Steinbach (eds.), Linguistic foundations of narration in spoken and sign languages, 251–​274. Amsterdam: John Benjamins. Barker, Chris & Chung-​chieh Shan. 2008. Donkey anaphora is in-​scope binding. Semantics and Pragmatics 1. 1–​46. Brasoveanu, Adrian. 2013. Modified numerals as post-​suppositions. Journal of Semantics 30(2). 155–​209. Bumford, Dylan. 2015. Incremental quantification and the dynamics of pair-​list phenomena. Semantics and Pragmatics 8(9). 1–​70. Cecchetto, Carlo, Alessandra Cecchetto, Carlo Geraci, Mirko Santoro, & Sandro Zucchi. 2015. The syntax of predicate ellipsis in Italian Sign Language (LIS). Lingua 166. 214–​235. Chomsky, Noam. 1977. On wh-​movement. In Peter Culicover, Tom Wasow, & Adrian Akmajian (eds.), Formal syntax, 71–​132. New York: Academic Press. Chomsky, Noam. 1981. Lectures on government and binding. Dordrecht: Foris. Cuxac, Christian. 2001. French Sign Language: Proposition of a structural explanation by iconicity. In Annelies Braffort, Rachid Gherbi, Sylvie Gibet, Daniel Teil, & James Richardson (eds.), Gesture-​based communication in human–​computer interaction (Lecture Notes in Computer Science), 165–​184. Heidelberg: Springer. Dahl, Osten. 1974. How to open a sentence: Abstraction in natural language (Logic Grammar Reports 12). Göteborg: University of Göteborg. Davidson, Kathryn. 2015. Quotation, demonstration, and iconicity. Linguistics and Philosophy 38. 477–​520. Dekker, Paul. 1993. Transsentential meditations:  Ups and downs in dynamic semantics. Amsterdam: University of Amsterdam PhD dissertation. Dekker, Paul. 2004. Cases, adverbs, situations and events. In Hans Kamp & Barbara Partee (eds.), Context-​dependence in the analysis of linguistic meaning, 383–​404. Amsterdam: Elsevier. Elbourne, Paul. 2005. Situations and individuals. Cambridge, MA: MIT Press. Emmorey, Karen (ed.). 2003. Perspectives on classifier constructions in signed languages. Mahwah, NJ: Lawrence Erlbaum. Evans, Gareth. 1980. Pronouns. Linguistic Inquiry 11(2). 337–​362. Giorgolo, Gianluca. 2010. Space and time in our hands. Utrecht:  Utrecht University PhD dissertation. Graf, Thomas & Natasha Abner. 2012. Is syntactic binding rational? In Proceedings of the 11th International Workshop on Tree Adjoining Grammars and Related Formalisms, 189–​197. Paris: ACL. Greenberg, Gabriel. 2013. Beyond resemblance. Philosophical Review 122(2). 215–​287. Groenendijk, Jeroen & Martin Stokhof. 1991. Dynamic predicate logic. Linguistics and Philosophy 14(1). 39–​100.


Jeremy Kuhn Heim, Irene. 1982. The semantics of definite and indefinite noun phrases. Amherst, MA: University of Massachusetts PhD dissertation (available at: semanticsarchive.net/​Archive/​jA2YTJmN). Henderson, Robert. 2014. Dependent indefinites and their post-​ suppositions. Semantics and Pragmatics 7(6). 1–​58. Jacobson, Pauline. 1999. Towards a variable free semantics. Linguistics and Philosophy 22(2). 117–​184. Johnson, Mark & Sam Bayer. 1995. Features and agreement in Lambek categorial grammar. Proceedings of the Formal Grammar Workshop, 123–​137. Barcelona. Kamp, Hans. 1981. A theory of truth and semantic representation. In Jeroen Groenendijk, Theo Janssen, & Martin Stokhof (eds.), Formal methods in the study of language, 277–​322. Amsterdam: Mathematisch Centrum. Kimmelman, Vadim. 2015. Distributive quantification in Russian Sign Language. Paper presented at Formal and Experimental Advances in Sign Language Theory (FEAST 4), Barcelona, Spain. Koulidobrova, Elena. 2009. self: Intensifier and ‘long distance’ effects in ASL. Paper presented at the 21st European Summer School in Logic, Language, and Information. Bordeaux. Koulidobrova, Helen. 2012. When the quiet surfaces: ‘transfer’ of argument omission in the speech of ASL-​English bilinguals. Storrs, CT: University of Connecticut PhD dissertation. Koulidobrova, Helen & Diane Lillo-​Martin. 2016. A ‘point’ of inquiry: The case of the (non-​)pronominal IX in ASL. In Patrick Grosz & Pritty Patel-​Grosz (eds.), Impact of pronominal form on interpretation, 221–​250. Berlin: Mouton de Gruyter. Kuhn, Jeremy. 2016. ASL loci: Variables or features? Journal of Semantics 33(3). 449–​491. Kuhn, Jeremy. 2017. Dependent indefinites:  The view from sign language. Journal of Semantics 34(3). 449–​491. Liddell, Scott. 2003. Grammar, gesture, and meaning in American Sign Language. Cambridge: Cambridge University Press. Lillo-​Martin, Diane. 1986. Two kinds of null arguments in American Sign Language. Natural Language and Linguistic Theory 4. 415–​444. Lillo-​Martin, Diane. 1991. Universal grammar and American Sign Language. Setting the null argument parameters. Dordrecht: Kluwer. Lillo-​Martin, Diane & Edward Klima. 1990. Pointing out differences: ASL pronouns in syntactic theory. In Susan D. Fischer & Patricia Siple (eds.), Theoretical issues in sign language research. Vol.1: Linguistics, 191–​210. Chicago: University of Chicago Press. Lillo-​ Martin, Diane & Richard Meier. 2011. On the linguistic status of ‘agreement’ in sign languages. Theoretical Linguistics 37(3/​4). 95–​141. Muskens, Reinhard. 1996. Combining Montague semantics and discourse representation. Linguistics and Philosophy 19(2). 143–​186. Neidle, Carol, Judy Kegl, Dawn MacLaughlin, Benjamin Bahan, & Robert G. Lee. 2000. The syntax of American Sign Language: Functional categories and hierarchical structure. Cambridge, MA: MIT Press. Nouwen, Rick. 2003. Plural pronominal anaphora in context: Dynamic aspects of quantification. Utrecht: Utrecht University PhD dissertation. Nuhbalaoglu, Derya. 2018. Comprehension and production of referential expressions in German Sign Language and Turkish Sign Language: An empirical approach. Göttingen: Georg-​August-​ Universität Göttingen PhD dissertation. Perniss, Pamela & Ulrike Zeshan. 2008. Possessive and existential constructions in Kata Kolok. In Ulrike Zeshan & Pamela Perniss (eds.), Possessive and existential constructions in sign languages, 125–​150. Nijmegen: Ishara Press. Postal, Paul. 1971. Cross-​over phenomena. 
New York: Holt, Rinehart & Winston. Quer, Josep. 2005. Context shift and indexical variables in sign languages. Semantics and Linguistic Theory (SALT) 15. 152–​168. Quer, Josep. 2012. Quantificational strategies across language modalities. In Maria Aloni (ed.), Proceedings of the 18th Amsterdam Colloquium, 82–​91. Berlin and Heidelberg: Springer-​Verlag. Quer, Josep & Joana Rosselló. 2013. On sloppy readings, ellipsis and pronouns: Missing arguments in Catalan Sign Language (LSC) and other argument-​drop languages. In Victoria Camacho-​ Taboada, Ángel Jiménez-​Fernández, Javier Martín-​González, & Mariano Reyes-​Tejedor (eds.), Information structure and agreement, 337–​370. Amsterdam: John Benjamins.


Discourse anaphora Quine, Willard. 1960. Variables explained away. Proceedings of the American Philosophical Society 104(3). 343–​347. Ross, John R. 1967. Constraints on variables in syntax. Cambridge, MA: MIT PhD dissertation. Sandler, Wendy & Diane Lillo-​ Martin. 2006. Sign language and linguistic universals. Cambridge: Cambridge University Press. Santoro, Mirko & Carlo Geraci. 2013. Weak crossover in LIS. Paper presented at Theoretical Issues in Sign Language Research (TISLR 11), London. Schembri, Adam, Kearsy Cormier, & Jordan Fenlon. 2018. Indicating verbs as typologically unique constructions: Reconsidering verb ‘agreement’ in sign languages. Glossa 3(1): 89. 1–​40. Schlenker, Philippe. 2011. Donkey anaphora:  The view from sign language (ASL and LSF). Linguistics and Philosophy 34(4). 341–​395. Schlenker, Philippe. 2014. Iconic features. Natural Language Semantics 22(4). 299–​356. Schlenker, Philippe. 2016. Featural variables. Natural Language and Linguistic Theory 34. 1067–​1088. Schlenker, Philippe, Jon Lamberton, & Mirko Santoro. 2013. Iconic variables. Linguistics and Philosophy 36(2). 91–​149. Schlenker, Philippe & Gaurav Mathur. 2013. A strong crossover effect in ASL. Snippets 27. 16–​18. Sharvit, Yael. 1999. Resumptive pronouns in relative clauses. Natural Language and Linguistic Theory 17(3). 587–​612. Steinbach, Markus & Edgar Onea. 2016. A DRT analysis of discourse referents and anaphora resolution in sign language. Journal of Semantics 33(3). 409–​448. Szabolcsi, Anna. 1987. Bound variables in syntax: Are there any? In Jeroen Groenendijk, Frank Veltman, & Martin Stokhof (eds.), Proceedings of the 6th Amsterdam Colloquium, 331–​353. Amsterdam: ITLI (UvA). Szabolcsi, Anna. 2003. Binding on the fly:  Cross-​sentential anaphora in variable-​free semantics. In Geert-​Jan M. Kruijff & Richard T. Oehrle (eds.), Resource-​sensitivity, binding and anaphora, 215–​227. Dordrecht: Kluwer Academic Publishers. Van den Berg, Martin. 1996. Some aspects of the internal structure of discourse: The dynamics of nominal anaphora. Amsterdam: Universiteit van Amsterdam PhD dissertation. Wienholz, Anne, Derya Nuhbalaoglu, Nivedita Mani, Annika Herrmann, Edgar Onea, & Markus Steinbach. 2018. Pointing to the right side? An ERP study on anaphora resolution in German Sign Language. PLoS ONE 13(9). Zucchi, Sandro. 2011. Event descriptions and classifier predicates in sign languages. Presentation at Formal and Experimental Advances in Sign Language Theory (FEAST 1), Venice.


22 DISCOURSE PARTICLES
Theoretical perspectives
Elisabeth Volk & Annika Herrmann

22.1  Introduction

In a narrow sense, discourse particles are often understood to be equivalent to modal particles. Modal particles may express different degrees of probability of, attitudes towards, and expectations about the propositional content of an utterance as well as updates to the common ground. In this sense, they do not describe particular states of affairs, but indicate how mutually accepted, controversial, or certain a proposition is to be taken. As such, discourse particles link the propositional content of an utterance to the knowledge and belief systems of the discourse participants (Zimmermann 2011). In line with Stede & Schmitz (2000), we take a broader perspective on discourse particles and understand them as a heterogeneous class of linguistic and gestural elements that serve discourse-structural and expressive functions. Accordingly, discourse particles may steer the flow of conversations and facilitate the interpretation of utterances in discourse. Most importantly, they do not change the truth-conditional meaning of utterances that contain them, but rather add pragmatic meaning. Moreover, discourse particles are typically not inflected, bear no grammatical relationship to other elements of the sentence, and may be phonologically ill-formed (Schiffrin 1987). In sign languages, discourse particles may be expressed by manual and/or non-manual strategies as well as the use of the signing space (see Perniss, Chapter 17). Based on research on discourse particles in various sign languages, we propose a list of their functions in signed discourse as presented in Figure 22.1.


Figure 22.1  Functions of discourse particles in sign languages

Accordingly, we assume that functions of discourse particles in sign languages can be assigned to three major discourse categories: discourse regulation, coherence, and modal meaning. In the following three sections, we will elaborate on each of these discourse categories in turn and discuss the associated forms and functions of discourse particles in sign languages.

22.2  Discourse regulation

The category of discourse regulation relates to interactional aspects of signed conversations and therefore includes those discourse particles that steer the flow of dialogues and establish smooth transitions between turns of conversation partners. Across sign languages, it is common for signers to tap on the interlocutor’s arm, wave a hand, knock on the table, or turn the light on and off in order to start a conversation (Baker 1977; Baker & van den Bogaerde 2012, 2016). While such tactile and visual signals are very explicit ways to draw attention to oneself, signers may also use more implicit turn-taking signals to make communication more effective. Among turn-taking signals in sign languages are non-manual markers such as head tilts and changes in eye gaze, as well as specific manual markers (Baker 1977; Baker & van den Bogaerde 2012, 2016).1 Moreover, prosodic signals such as changes in the speed of signing and turn-final holds are reported to influence turn-taking behavior (Groeber & Pochon-Berger 2014).2 In the following, we consider manual turn-taking signals as discourse particles and only refer to non-manuals
and prosodic signals in case they occur simultaneously with manual markers. In addition to turn-​taking signals, we will also discuss the inventory and interpretation of response particles in this section. Turn-​taking signals are used when a signer wishes to open a turn, end a turn, or hold the conversational floor. Furthermore, interlocutors can use backchannels to give feedback while indicating that they accept a signer’s right to the floor (Baker-​Shenk & Cokely 1980). One manual marker that may fulfill all of these functions and has been identified for various sign languages is palm-​up. Palm-​up has been described as a turn-​taking signal for American Sign Language (ASL; Baker-​Shenk & Cokely 1980; Roush 2007), Danish Sign Language (DSL; Engberg-​Pedersen 2002), Flemish Sign Language (VGT; Van Herreweghe 2002), Turkish Sign Language (TİD; Zeshan 2006), Austrian Sign Language (ÖGS; Lackner 2007), New Zealand Sign Language (NZSL; McKee & Wallingford 2011), Sign Language of the Netherlands (NGT; van Loon 2012), and German Sign Language (DGS; Volk 2017). In its canonical form, palm-​up is produced with either one or both open hands, five fingers extended flat or slightly curved. Starting from a downward or inward palm orientation, the hands rotate and end in an upward palm orientation. Variation in form has been addressed in Engberg-​Pedersen (2002), Conlin et  al. (2003), van der Kooij et  al. (2006), and McKee & Wallingford (2011), among others, and includes distributional features such as the combination with pointing, prosodic features such as final lengthening, and phonological features, such as variation in handshape and location. As McKee & Wallingford (2011: 223) note, however, a systematic relationship between form variants and functions of palm-​up in sign languages appears to be difficult to establish. Instead, functions are typically assigned to palm-​up based on its sequential positioning, co-​occurring non-​manual markers, and context, whereas form variants may be connected to the phonological environment of preceding and/​ or following signs, among other aspects (McKee & Wallingford 2011). As palm-​up is associated with various functions not only in sign languages, but also in spoken languages, its clear classification as a gesture versus a sign has been a matter of much debate (see Cooperrider et  al. (2018) for an overview, as well as Goldin-​Meadow & Brentari (2017) for a discussion of differences between gesture and sign). To account for its ambiguous status in-​between gesture and sign, we neutrally refer to palm-​up as a marker and gloss it, consistent with McKee & Wallingford (2011), in small letters rather than small caps. When palm-​up is used at the beginning of a turn, the hands are raised and turned to a signing position. Although this marker might be minimal in form, it has been described as sufficient to signal a signer’s intent to open a turn. If articulated more emphatically, the effect of palm-​up is analogous to an intake of breath or a vocalized ‘ah’ or ‘um’ of a nonsigner at the beginning of a turn (McKee & Wallingford 2011). In some cases, mouthings such as ‘well’ or ‘so’ may accompany palm-​up in turn-​opening contexts. In line with van Loon et  al. (2014), we consider these examples as discourse markers and discuss their properties in Section 22.3. An interesting example of palm-​up as a turn-​opener mirrored in constructed dialogue is reported for NZSL (McKee & Wallingford 2011: 233) and given in (1). 
In this example, a signer recounts asking a hearing friend to listen to her car engine and starts the constructed dialogue (marked by angle brackets) with palm-up.


(1) 

ME GO THERE [NAME] ME <palm-up NEED 2HELP1 ME> DEAF palm-up
‘I went to [name] and said “Look, I need you to help me” – because I’m deaf, eh.’
(NZSL, McKee & Wallingford 2011: 233)

Engberg-Pedersen (2002) discusses similar cases in DSL and raises the question of whether palm-up should be attributed to the signer in the present discourse frame or rather to the quoted signer within the constructed dialogue. McKee & Wallingford (2011) argue that the signer in (1) makes a brief pause after ME and shifts in eye gaze and posture to indicate role shift, so that palm-up can be attributed to the constructed dialogue (for non-manual markers of role shift, see Herrmann & Steinbach (2012), Lillo-Martin (2012), Steinbach, Chapter 16, among others). Accordingly, it is interpreted as a turn-opener that the signer uses when she enacts asking her friend. In contrast, the sentence-final palm-up in (1) does not belong to the constructed dialogue and is addressed to the present addressee. This use of palm-up can be analyzed as epistemic, as the signer is indicating obviousness. We will discuss this function of palm-up in Section 22.4.

At the end of a sentence, palm-up can also indicate the end of a turn. As in its function as a turn-opener, palm-up now serves as a boundary between signing and not signing. In this way, it marks a transition that returns the hands to a position lower than the signing space and restores their rest position. An example for palm-up as a turn-ender in NZSL (McKee & Wallingford 2011: 234) is given in (2).

(2)

ME S/HE WILL MEET DISCUSS palm-up
‘S/he and I will get together to discuss it.’
(NZSL, McKee & Wallingford 2011: 234)

As a further turn-​taking signal, palm-​up functions as a pause filler. Pause fillers occur in case a signer holds the conversational floor during a pause when for example hesitating or losing their train of thought. This corresponds to a vocalized ‘um’ or ‘er’, parenthetical remarks such as ‘well’, ‘you know’, or ‘I mean’, as well as sound stretches in spoken languages (McKee & Wallingford 2011; van Loon 2012). By using palm-​up as a pause filler, the hands occupy the signing space and therefore indicate the signer’s intention to continue after having composed the next thought. Apart from palm-​up, the gesture finger-​wiggle (‘g-​finger-​wiggle’) may also be used as a pause filler in NZSL (McKee & Wallingford 2011) and DGS (Volk 2017). The example in (3)  from DGS contains two pause fillers indicating hesitation, first realized by finger-​wiggle and second by palm-​up. The two pause fillers used in (3) are also shown in Figures 22.2 and 22.3. In DGS, both palm-​up and finger-​wiggle often co-​occur with non-​manual markers such as pressed or pursed lips, eyeblinks, and a break in eye contact when used as a pause filler (Volk 2017). (3) 

IX3a MUST ACCEPT IX3a g-finger-wiggle TALENT FOR palm-up ARTS
‘He has to accept that he has a talent for arts.’
(DGS, Volk 2017)


Figure 22.2  Finger-​wiggle used in (3)

Figure 22.3  Palm-​up used in (3)

Another function of pause fillers in NZSL (McKee & Wallingford 2011) and DGS (Volk 2017) is to mark the signer’s intention to hold the conversational floor despite an interruption by the interlocutor. In this case, the signer may use palm-​up and hold the hands in place during the interlocutor’s comment, thus signaling that her turn is not finished yet. Importantly, both functions of pause fillers signal the signer’s wish to continue her turn; the first indicates the signer’s hesitation, whereas the second shows a reaction to an interruption made by the interlocutor. Turn-​taking signals are also used by the interlocutor in the form of backchannels. These feedback elements may express an interlocutor’s involvement or agreement with the signer’s utterance and encourage the signer to continue (Baker 1977). In spoken languages such as English, this corresponds to the use of particles such as ‘yeah’, ‘uh-​huh’, ‘hm’, ‘right’, and ‘okay’ as feedback by the interlocutor (Ward & Tsukahara 1999). In sign languages, backchannels can be realized by palm-​up (Engberg-​Pedersen 2002; McKee & Wallingford 2011; van Loon 2012) and lexical signs such as OH-​I-​SEE, FINE, CORRECT, and YES (Martínez 1995; Volk 2017). Moreover, backchannels may co-​occur with non-​ manuals such as brow raise, pursed lips, head nods, head tilts, forward body leans, and shoulder shrugs (Martínez 1995; van Loon 2012; Volk 2017). For DGS, Papaspyrou et al. (2008: 72, 181) further mention the raising of one or two nasal wings to be a specific non-​ manual backchannel marker. An example of palm-​up as a backchannel signal in DGS is given in Figure 22.4 (Volk 2017). Here, it is the turn of the signer on the right side of the Figure who is saying that she is unsure how to express her opinion. The signer on the left uses palm-​up as a backchannel and encourages her interlocutor to go on.


Figure 22.4  Palm-​up as a backchannel signal (Volk 2017)

Closely connected to backchannels are response particles that also assume functions at the discourse level. As stated above, backchannels indicate that the interlocutor is paying attention to the signer and give rather passive feedback while respecting the signer’s right to the floor. Response particles, however, are used to actively take over the turn to affirm or reject an assertion and to answer a question. In the following, we will briefly describe how response particles are interpreted in spoken languages and then turn to their distribution in sign languages. In response to an assertion or a question with a positive polarity as in (4a), English response particles (RPs) such as ‘yes’ and ‘no’ are possible.

(4)

a. A: You stole the cookie. / Did you steal the cookie?
b. B: Yes.  = B did steal the cookie.       AP: true    RP: positive
c. B: No.   = B didn’t steal the cookie.    AP: false   RP: negative
   (Krifka 2013: 2)

In (4b), the answer ‘yes’ confirms the truth of the antecedent proposition (AP), whereas the answer ‘no’ in (4c) rejects it (false). Moreover, ‘yes’ signals the positive polarity of B’s response, whereas ‘no’ signals that the response has negative polarity. In relation to assertions or questions with a positive polarity as in (4a), response particles are therefore unambiguous for interpretation (Krifka 2013). In contrast, when an assertion or a question is negated, it has been proposed that the interpretation of response particles depends on whether a language has a polarity-​based or a truth-​based answering system (Kuno 1973; Pope 1976; Jones 1999; Levinson 2010; Holmberg 2013). English has been argued to be polarity-​based (Kuno 1973; Jones 1999). In response to negative questions such as (5a), English response particles are therefore expected to signal the polarity of the (elliptical) response clause rather than to express (dis)agreement with the interlocutor (González-​Fuente et  al. 2015). Accordingly, the positive response particle ‘yes’ in (5b) precedes a positive response clause (‘Yes, I did.’), and the negative response particle ‘no’ in (5c) precedes a negative response clause (‘No, I didn’t.’). Note that the truth of the antecedent negative proposition in (5a) is confirmed (= true) using the particle ‘no’ but rejected using the particle ‘yes’.


(5)  a. A: Did you not steal the cookie?
     b. B: Yes (, I did).    = B did steal the cookie.      AP: false   RP: positive
     c. B: No (, I didn’t).  = B didn’t steal the cookie.   AP: true    RP: negative
        (Krifka 2013: 2)

Truth-based languages such as Cantonese, on the other hand, use ‘yes’-type particles to confirm the truth of an antecedent negative proposition (6b) and ‘no’-type particles to reject the truth of the antecedent (6c) (Holmberg 2013: 2-3). The polarity of the response clause does not influence the choice of response particles (Pope 1976; González-Fuente et al. 2015).

(6)  a. A: Keoi-dei    m     jam     gaafe?
            he/she-PL  not   drink   coffee
            ‘Do they not drink coffee?’
     b. B: Hai.
            yes
            ‘Yes (, they don’t drink coffee).’               AP: true    RP: positive
     c. B: M    hai.
            not  yes
            ‘No (, they drink coffee).’                      AP: false   RP: negative
        (Cantonese, Holmberg 2013: 2-3)

Despite the described differences in answering systems, it has been shown that a strict separation between polarity-based and truth-based languages usually does not hold. Rather, languages may use both systems but exhibit a preference for one over the other. The use of one system over another may be influenced by a range of factors such as the syntactic properties of the question as well as prosodic and gestural patterns (Holmberg 2013; González-Fuente et al. 2015; Goodhue & Wagner 2018). As shown in (7), this also applies to English. Hence, the interpretation of response particles preferentially follows a polarity-based system (7b–c) but truth-based interpretations are possible (7d–e) (Krifka 2013: 2). In this sense, both ‘yes’ and ‘no’ can be used to confirm the antecedent proposition in (7a), so that the use of English response particles becomes ambiguous with negative antecedents (Claus et al. 2017).

(7)  a. A: Did you not steal the cookie?
     b. B: Yes (, I did).     = B did steal the cookie.      AP: false   RP: positive
     c. B: No (, I didn’t).   = B didn’t steal the cookie.   AP: true    RP: negative
     d. B: Yes (, I didn’t).  = B didn’t steal the cookie.   AP: true    RP: positive
     e. B: No (, I did).      = B did steal the cookie.      AP: false   RP: negative
        (Krifka 2013: 2)
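As a compact summary of the two pure answering strategies just described, the toy function below spells out which particle each system predicts for a given antecedent and response (a schematic sketch under my own simplifying labels, not an implementation from the literature; as just noted, real languages such as English mix the two systems).

```python
# Schematic comparison of polarity-based vs. truth-based response systems.
# Illustrative simplification only; actual languages typically mix both strategies.

def predicted_particle(system, antecedent_negative, confirms_antecedent):
    """Return the particle ('yes'/'no') each pure system predicts.

    system              -- "polarity" or "truth"
    antecedent_negative -- True if the antecedent assertion/question is negated
    confirms_antecedent -- True if the responder takes the antecedent to be true (AP: true)
    """
    if system == "truth":
        # 'yes' confirms the antecedent proposition, 'no' rejects it.
        return "yes" if confirms_antecedent else "no"
    # Polarity-based: the particle matches the polarity of the (elliptical) response clause.
    response_clause_negative = antecedent_negative if confirms_antecedent else not antecedent_negative
    return "no" if response_clause_negative else "yes"

# 'Did you not steal the cookie?' answered by someone who did NOT steal it (AP: true):
print(predicted_particle("polarity", True, True))   # 'no'  -- cf. (5c)
print(predicted_particle("truth", True, True))      # 'yes' -- cf. (6b)
```

On this way of stating things, the ambiguity of English responses in (7) corresponds to speakers having access to both settings of the system parameter.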

Little is known about the inventory of response particles and their interpretation in sign languages. First insights come from ASL and DGS. González et  al. (2018, 2019) investigate the use of the particles YES, NO, NEVER, NOTHING, and NONE in response to polar questions in ASL. They argue that ASL follows a mixed system as also proposed for English and shown in (8). Response particles are either interpreted according to a polarity-​based system as can be inferred from (8c) and (8d), or according to a truth-​based

system as given in (8e) and (8f).3 Thus, both NO (8d) (polarity-​based response) and YES (8e) (truth-​based response) can be used to confirm the antecedent proposition given in (8b).



(8)

              hs
a. Amy:  ZOE PLAY VIDEO GAME NEVER
         ‘Zoe never plays video games.’
                   br
b. Ben (to Zoe):  IXZoe NEVER
         ‘You never play video games?’
          hn
c. Zoe:  YES, IXZoe ONCE-IN-A-WHILE
         ‘Yes, I play video games once in a while.’        AP: false   RP: positive
          hs           hs
d. Zoe:  NO, IXZoe NEVER
         ‘No, I never play video games.’                   AP: true    RP: negative
          hn          hs
e. Zoe:  YES, IXZoe NEVER
         ‘Yes, I never play video games.’                  AP: true    RP: positive
          hs
f. Zoe:  NO, IXZoe ONCE-IN-A-WHILE
         ‘No, I play video games once in a while.’         AP: false   RP: negative
         (ASL, González et al. 2018)

In their investigation of response particles in DGS, Loos et al. (2019, 2020) show that in DGS, too, signers may generally choose between truth-based and polarity-based responses to negative assertions. However, DGS signers seem to prefer a truth-based strategy in their responses. Accordingly, they tend to confirm negative assertions, as in (9a), by using positive responses such as YES, CORRECT, and head nods (9b), whereas negative responses, such as NO and NOT-YET (9c), are rarely used to do so. Similarly, when DGS signers intend to reject a negative assertion, they also tend to choose a truth-based answer, i.e., NO, WRONG, CORRECT-NEG, and headshakes (9d), whereas positive responses such as YES and head nods (9e) occur less frequently. A systematic exception to this observation is the manual sign YES combined with the mouthing /doch/. In German, ‘doch’ serves as a dedicated particle for rejecting negative assertions in a tripartite response particle system. Some DGS signers consistently reject negative assertions using YES with the mouthing ‘doch’, which may suggest that they, too, use a tripartite response particle system.

(9)
              br             hs
   a. A:  GARDENER LAWN SOW NOT-YET
          ‘The gardener hasn’t sown the lawn yet.’
   b. B:  YES
          ‘Yes (, the gardener hasn’t sown the lawn yet).’       AP: true    RP: positive
           hs
   c. B:  NO
          ‘No (, the gardener hasn’t sown the lawn yet).’        AP: true    RP: negative
           hs
   d. B:  NO
          ‘No (, the gardener has already sown the lawn).’       AP: false   RP: negative
   e. B:  YES
          ‘Yes (, the gardener has already sown the lawn).’      AP: false   RP: positive
          (DGS, Loos et al. 2019)

Loos et al. (2019, 2020) further note that in rare cases, polarity- and truth-based responses can also be combined in DGS. For instance, it is possible to confirm a negative assertion by a combination of a positive manual response such as CORRECT (truth-based) and a headshake (polarity-based).

22.3  Coherence

Discourse develops through utterances of a single signer or by more than one signer in interaction. The order and logical connections between these units of information contribute to the discourse structure and create or maintain coherence to convey global meaning beyond the sentence level. In this section, we discuss discourse particles that function as discourse markers in sign languages to connect utterances in discourse, structure a text into sequential segments, and signal a change in discourse topics. Moreover, we briefly cover those discourse particles which indicate focus marking (for syntactic and prosodic focus in sign languages, see Kimmelman & Pfau, Chapter 26; for discourse anaphora and the analysis of pronouns and co-reference, see Kuhn, Chapter 21).

In order to connect signs, clauses, or sentences with each other, palm-up can be used in sign languages such as British Sign Language (BSL; Waters & Sutton-Spence 2005), NZSL (McKee & Wallingford 2011), Norwegian Sign Language (NSL; Amundsen & Halvorsen 2011), NGT (van Loon 2012), DGS (Volk 2017), and DSL (Engberg-Pedersen 2002, 2020). Palm-up may convey propositional relations of coordination, temporal sequence, contrast, and causation, which are interpreted from discourse context, non-manual markers, and/or simultaneous mouthings (McKee & Wallingford 2011). An example of palm-up as a connective in DGS is given in (10) (Volk 2017; prosodic breaks are indicated by a colon ‘:’).

(10)
                             hn                                       ‘aber’
   A:  IX1 THINK 24 HOUR GOOD palm-up : CASUAL GO-INSIDE GO-AROUND palm-up
                 puzzled      bf
       IX1 10-O’CLOCK 11-O’CLOCK 12-O’CLOCK IX1 GO-INSIDE IX1 palm-up
        ht-l, ‘aber’                                  hn
   B:  palm-up IX1 MYSELF ALREADY EXPERIENCE IX1 : ADMIT
   A:  ‘I think it’s good if supermarkets are open 24 hours. You can go inside and around casually, but I’m not sure if I would go at ten, eleven or twelve p.m.’
   B:  ‘But I already did that, I have to admit.’
       (DGS, Volk 2017)

In (10), A and B are discussing whether supermarkets in Germany should be open 24 hours. Signer A  actually uses palm-​up three times here. The first palm-​up, which is accompanied by head nods (‘hn’), can be interpreted as an evaluative marker as the signer first makes an evaluative statement and then confirms it at the end of the clause. The last palm-​up at the end of A’s utterance represents an affective marker and can be translated as an expression of uncertainty (for palm-​up functions related to the attitude of the signer, see also Section 22.4). The second palm-​up, however, connects two clauses 488

489

Discourse particles

with each other. Moreover, it co-​occurs with a mouthing of the German conjunction aber (engl. ‘but’) and indicates a relation of contrast between the clauses. Palm-​up as a connective does not only express propositional relations but is also used as a discourse marker connecting utterances of different interlocutors. Signer B in (10) produces palm-​up at the beginning of his turn to refer back to the last statement of signer A. When he is tilting his head to the left side (‘ht-​l’) and mouthes ‘aber’ (engl. ‘but’), he creates a relationship of contrast that is interpreted at the discourse level. Similar functions of palm-​up are also reported for NZSL (McKee & Wallingford 2011), NGT (van Loon et al. 2014), and DSL (Engberg-​Pedersen 2002, 2020), and are usually translated as a turn-​initial ‘well’. As presented in Figure 22.5, DGS further includes the lexical signs BUT and OTHER-​SIDE (‘on the other hand’) as contrastive discourse markers as well as ALSO, OR, and PLUS as additive discourse markers (Volk 2017).

OTHER-SIDE

BUT

ALSO

OR

PLUS

Figure 22.5  Contrastive and additive discourse markers in DGS (Volk 2017)

Crucially, the discourse markers presented above do not only appear turn-​initially but are also used at the beginning of a new utterance within the turn of a single signer. In this sense, discourse markers can connect discourse segments of a turn and build relations among them (Volk 2017). A  related function of palm-​up is often termed as an ‘elaborative marker’, with reference to Fraser (1999:  948). In this case, palm-​up introduces a segment that specifies or augments a previous predication by, for instance, making a comment, adding further information, or giving an example (Amundsen & Halvorsen 2011; McKee & Wallingford 2011; van Loon 2012). Accordingly, palm-​up in (11) (McKee & Wallingford 2011:  229) introduces an evaluative comment (OLD-​FASHIONED) which refers back to the first part of the statement. 489

490

Elisabeth Volk & Annika Herrmann

(11) 

palm-​up OLD-​FASHIONED ‘They [the school] didn’t allow shaved heads (which was) old fashioned.’             (NZSL, McKee & Wallingford 2011: 229) THEY NOT-​ALLOW SHAVE-​HEAD NOT-​ALLOW

Discourse markers in sign languages also indicate the discourse structure and split up utterances of a signer into sequential segments. An example for ASL is the sign NOW-​ THAT (Roy 1989). NOW-​THAT is described as a two-​handed sign that includes a one-​handed variant of NOW with the other hand simultaneously producing THAT. It can be used, for instance, to divide a lecture into three parts, i.e., the introduction, the main body, and the conclusion, and appears at the beginning of the first utterance of each part. Gabarró-​ López et al. (2016) associate a similar discourse marker with so-​called ‘list-​buoys’ (Liddell 2003), which are numeral signs from one to five produced by the non-​dominant hand. Gabarró-​López et  al. (2016) argue that such list-​buoys function as discourse markers in French Belgian Sign Language (LSFB) in case they divide the discourse into larger segments.4 Another discourse marker of ASL with a related discourse structuring function is the sign FINE. As shown in (12), FINE separates different events of a longer stretch of discourse; unlike NOW-​THAT and list-​buoys, however, it is argued to occur only utterance-​finally at the end of each event (Metzger & Bahan 2001: 132). (12)

ANNOUNCE HAVE TIME TIME-​NINE-​O’CLOCK IX SILENCE FOR ONE MINUTE. FINE. DURING POSS1 CLASS TIME EIGHT-​TO TEN. FINE.

‘We were all told that there was time set aside for a moment of silence, at nine o’clock. Okay. That happened to be during the time I was teaching, since my class met from 8 to 10 o’clock. Okay.’ (ASL, Metzger & Bahan 2001: 132) Discourse markers may also draw attention to a new subtopic. This applies to the sign NOW in ASL (Roy 1989). In contrast to the temporal sign NOW, Roy (1989: 236) explains that the discourse marker only appears at the beginning of an utterance and can occur as part of a topic marking, as demonstrated in (13), as well as with a body shift.     

(13)

br

NOW CL:FISH IX3 TRUE STRANGE IX3

‘Now, as for the fish, it is truly unique, it is.’              (ASL, Roy 1989: 236)

Other discourse markers that have been proposed to signal a change in discourse topic or to mark a transition between discourse topics are (i) the sign ON-​TO-​THE-​NEXT-​PART in ASL (Roy 1989), (ii) the sign HEY in ASL, which is produced with one hand waving up and down and facing palm-​downwards (Hoza 2011), (iii) a sign in NGT and LSE that involves moving both hands with a five-​handshape from the contralateral to the ipsilateral side (Quer et al. 2017), and (iv) palm-​up in DSL (Engberg-​Pedersen 2002). While the discourse markers above refer to the macrostructure, there are also particles in sign languages operating on the microstructure of discourse that refer to information-​ structural aspects. For instance, palm-​up can be used in DSL to connect a topic and a predicate with a consequent focus effect. Engberg-​Pedersen (2002) argues that in this case, the topic is metaphorically presented for consideration in relation to the upcoming predicate. Also, Conlin et al. (2003) mention that palm-​up may occur after a topic phrase in ASL (for information-​structural aspects of topic and focus, see Kimmelman & Pfau, Chapter 26). 490



Particles marking additive, restrictive, or scalar focus have been investigated for different sign languages including ASL (Wilbur & Patschke 1998), DGS, NGT and Irish Sign Language (Irish SL; Herrmann 2013, 2015; Kimmelman 2019), and Russian Sign Language (RSL; Kimmelman 2019). Focus particles associate with and directly relate to a highlighted constituent of a sentence (König 1991). Within alternative semantics, the meaning of focus particles is defined in terms of inclusion and exclusion of alternatives (Rooth 1992; Büring 2007).5 Additive particles such as ‘also’ and ‘as well’ and restrictive particles such as ‘only’ and ‘just’ usually have manual equivalents in sign languages. ASL has a manual sign ONLY that is accompanied by a body lean backwards (‘b-​bl’) for exclusion, as illustrated in (14) (Wilbur & Patschke 1998: 284). Additive particles, on the other hand, often exhibit a forward body lean for inclusion. In ASL, focus particles are found to either precede or follow their constituent. In DGS, their distribution can mostly be accounted for by assuming an adverbial analysis, and the focus particles mostly appear preceding and adjacent to the focus constituent or the verb phrase (VP) if the VP or V is in focus. In rare cases, some specific grammaticalized focus particles may appear sentence-​finally, as in (15), where the subject ‘Tim’ is accompanied by brow raise (‘br’) as focus marking (Herrmann 2013: 249).    (14) 

           b-bl
     KINGS ONLY MEET…
     ‘Kings only meet…’
     (ASL, Wilbur & Patschke 1998: 284)

      br
(15) TIM IX3 BANANA EAT ONLY
     ‘Only TimF eats a banana.’
     (DGS, Herrmann 2013: 249)

The sentence-​final instances in DGS can be explained by assuming a right-​headed C° analysis as a focus position following Petronio & Lillo-​Martin’s (1997) analysis of wh-​words in ASL, so that the focus particles may take scope over their associated constituents. For further doubling phenomena and combinatory examples of focus particles, see Herrmann (2013). Scalar focus particles are a specific case of particles in sign languages as their different levels of meaning (additive and scalar) are not realized by a separate sign, but by a combination of an additive particle and non-​manual expressions (Wilbur & Patschke 1998; Herrmann 2013). This compositional realization of different meaning contributions is facilitated by the simultaneous structures afforded by the visual modality, as it seems to recur across many sign languages (possibly a modality effect, see Kimmelman 2019), thus providing interesting insights for spoken language theories of compositional meaning.
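To make the alternative-semantics view of focus particles mentioned above more concrete, the following is a minimal textbook-style sketch in the spirit of Rooth (1992); the particular formulation, including the symbols p (the prejacent proposition), C (the set of focus alternatives), and w (the world of evaluation), is a simplification and is not quoted from the works cited in this chapter.

     ⟦ONLY⟧(C)(p)(w) = 1 iff ∀q ∈ C [q(w) = 1 → q = p], typically with the presupposition that p(w) = 1
     ⟦ALSO⟧(C)(p)(w) = 1 iff p(w) = 1, with the presupposition that ∃q ∈ C [q ≠ p ∧ q(w) = 1]

On such a sketch, a DGS example like (15) would be analyzed with p = ‘Tim eats a banana’ and C containing propositions of the form ‘x eats a banana’ for the contextually relevant alternatives to Tim, so that the sentence excludes all alternatives distinct from the prejacent.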

22.4  Modal meaning

In terms of modal meaning, discourse elements such as modals, modal particles, tag questions, and intonational patterns may be used to indicate, for example, evidentiality, epistemicity, and the strengthening and softening of utterances. This section concentrates on the realization of modal meaning in sign language discourse with an emphasis on respective particles and their equivalents.




Ferreira-​Brito (1990) lists various equivalent signs in Brazilian Sign Language (Libras) for Portuguese modals, such as modal verbs, sentence-​final signs (e.g., OBVIOUS), and modal adjectives (e.g., IMPOSSIBLE, OBLIGATORY, OPTIONAL). In terms of modality, she says that a sign such as OPTIONAL may be used in both deontic and epistemic readings and that there are explicit lexical alternatives for both meanings, as well as degree variants for these expressions which involve movement alternations. Research on ASL has shown that some manual deontic modals (MUST, SHOULD, CAN) may be used epistemically in a limited number of contexts, indicating a grammaticalization process with phonological changes leading to reduplicated and cyclic forms (Wilcox 1996). Schaffer (2002) finds that deontic modals in ASL appear in preverbal position whereas modals in sentence-​final position rather trigger epistemic readings (also see Schaffer & Janzen 2016). Furthermore, Wilcox & Wilcox (1995:  144–​148) discuss specific epistemic modals and their independent developments, such as SEEM (derived from MIRROR), but also state that tag questions and non-​manual markers are frequently used in modal contexts. Non-​manuals exhibit a different syntactic distribution depending on deontic versus epistemic use; they are restricted to the VP with deontic modals but either aligned with the sign or spreading over the entire clause when co-​occurring with epistemic modals. Bross & Hole (2017) suggest a natural mapping of certain semantic scope-​taking operations on the signer’s body from lower manual areas for root modals to upper non-​ manual parts of the body (e.g., brow movements) for higher operators and apply a cartographic account within the framework of Generative Grammar. Their generalizations are, however, quite strong, as many elements taking scope over a proposition, such as, for instance, sentential adverbials and epistemic modal signs, may be either expressed by manual means or non-​manual means. Work on modal meaning in Spanish Sign Language (LSE; Salazar 2008; Cabeza-​Pereiro 2013) supports the claim that there are manual modals with epistemic readings. LSE has a sign for ‘must’ that can also be used in both modal senses. Moreover, non-​manuals alone (brow furrow) may be used for deontic necessity and epistemic stance may be accomplished by using brow furrow in combination with pursed lips. Modal meaning has further been investigated in Iranian Sign Language (ZEI) by Siyavoshi (2019), who adopts a Cognitive Grammar framework for modal meaning and also investigates the gestural origins of signs. The role of non-​manuals, in particular lip movements, for the expression of modal meaning is noted by Gianfreda et al. (2014) for Italian Sign Language (LIS). Lackner (2017) provides a detailed analysis of non-​manual markings with modal meaning in ÖGS, including head nods, headshakes, and body leans. In a recent study, Engberg-​Pedersen (2020) looks at discourse particles for (un)certainty in the field of epistemic modality in two unrelated sign languages, i.e., DSL and Japanese Sign Language. In several sign languages including ASL (Conlin et  al. 2003), NZSL (McKee & Wallingford 2011), NGT (van Loon 2012), and DGS (Volk 2017), palm-​up may combine with specific non-​manual signals such as body leans, shoulder shrugs, head tilts, brow movements, and lip movements to express evaluative and epistemic meaning (e.g., uncertainty, doubt, ignorance, and obviousness). 
An example of such a modal use of palm-​up in DGS is given in (16) from the data set of Volk (2017). While the first palm-​up in this example corresponds to a pause filler indicating hesitation (see Section 22.3), the sentence-​final palm-​up, which is produced with a backward body lean (‘b-​bl’) and downward corners of the mouth (‘mc-​d’), can be interpreted as a modal marker of uncertainty.




(16)                                                        b-bl, mc-d
     THERE-ARE ALSO SCHOOL-PAM++ IX-3PL palm-up INTEREST-LESS palm-up
     ‘There also might be students who, uhm, are not interested.’
     (DGS, Volk 2017)

DGS signers also use other lexical signs such as YES and CORRECT to express epistemic meaning. In (17), the sign YES is used at the end of a parenthetical clause that is marked non-​manually by a leftward head-​tilt (‘ht-​l’) and head nod (‘hn’). Here, the sign YES indicates the signer’s commitment to the truth of the proposition.    

(17)                                                  ht-l, hn
     GRADE ABOLISH FOR-THAT g-finger-wiggle DESCRIPTION : MORE WORK YES : […]
     ‘School grades should be abolished and substituted by, uhm, descriptions, which is of course more effort, …’
     (DGS, Volk 2017)

Strengthening or softening of utterances as one would do with modal particles may be realized by modifying the articulation of the manual sign or by adding non-manuals (Schaffer 2004; Herrmann 2007, 2013). With regard to modal particles, Herrmann (2013) found no manual equivalents to strengtheners or softeners in DGS, NGT, or Irish SL, apart from alternative strategies such as sentential adverbs and tag questions. Rather, specific non-manuals such as brow raise (‘br’), chin down (‘cd’), slow head nods (‘hn’), and mouth corners down (‘mc-d’) scope over the respective utterances and systematically combine to create complex meanings. Example (18) and Figure 22.6 illustrate certain non-manuals for uncertainty in the context of a surprising fact that triggers an uncertain declarative about the reason. The signer is not committed to the truth of the proposition. Depending on the interpretation of the signer, various degrees of uncertainty may be expressed (see example (17) for certain commitment to the truth of the proposition). Mouth corners down, for instance, often appears in highly uncertain interpretations (see Figure 22.6b, which also involves shrugged shoulders).

(18)        br                      bf ht-f, hn
     […] palm-up POSS-2 CAR SELL palm-up
     ‘[Strange, Tim is walking.] He has (Germ. wohl / Engl. probably) sold his car.’
     (DGS, Herrmann 2013)

Figure 22.6  Facial expressions for uncertainty on the same target sentence (‘Tim has sold his car.’) including the softening German modal particle ‘wohl’ (Herrmann 2013: 98, 130). The context triggers similar non-manuals (brow raise (‘br’), mouth corners down (‘mc-d’), head-tilt forward (‘ht-f’), and slow head nods (‘hn’)) depending on the degree of uncertainty and a frequent use of palm-up.




Taking a compositional approach along the lines of Dachkovsky & Sandler (2009), non-manual features convey certain broader pragmatic meanings and contribute to the modification of utterances. Brow raise is used for continuation as well as surprise, and mouth corners down usually appears to mark skepticism in the context of softening the proposition (for further very systematic form-meaning relations, also see the interpretation of ‘squint’ in Dachkovsky & Sandler (2009) for ISL and Herrmann (2015) for DGS). Whether the spread of non-manuals aligns with syntactic or prosodic constituents has been subject to much debate (Sandler 2010). In addition, Wilbur (2011) suggests a semantic operator analysis for the spreading behavior of certain non-manuals (see Wilbur, Chapter 24, for a discussion of theoretical analyses of non-manual markers in general). Still, various researchers agree that the linguistic use of specific non-manual means and their distribution are systematic, even in the field of modal meaning, which tightly interacts with gestural elements.

22.5  Conclusion

Discourse particles and their functions in discourse regulation, coherence, and modal meaning in sign languages fall within the domain of research on pragmatics and discourse in general. Compared to other topics in sign language research, the results in this field are still scarce. In addition, there is a need for investigations based on naturalistic and corpus-based data to truly describe and account for the elements that regulate discourse relations. The difficulty of disentangling gestural from grammatical manual modifications of signs as well as affective from linguistic non-manual elements often complicates identifying unique form-meaning mappings for discourse particles. However, the systematicity with which these elements are used, their distributions, grammaticalization patterns, and the combinatory nature of those means have been explained by various formal analyses. The multifaceted palm-up, which we deliberately call neither a sign nor a gesture in this chapter, appears in all three discourse categories. Some researchers therefore analyze it as a prosodic boundary marker, others as a turn-ending signal, and van der Kooij et al. (2006: 1) call it ‘a semantically and pragmatically empty manual form’ whose meaning is only derived in combination with the aligned non-manual elements. Amundsen & Halvorsen (2011) note its association with expressions of affective meaning. McKee & Wallingford (2011), van Loon (2012), and Volk (2017) also note that the age of a signer is a factor in the use of palm-up and its respective discourse functions. Van Loon et al. (2014) further suggest a grammaticalization chain for palm-up in sign languages; however, generative accounts of where to locate palm-up in the phrase structure are still missing.

Looking at modality-independent and modality-specific aspects of discourse particles in sign languages, we note that the availability of multiple articulators, the prevalence of simultaneity in sign languages, and the pragmatic nature of discourse regulation, coherence, and modal meaning often favor non-manual markers over sequential manual forms, so that discourse relations may be expressed only non-manually. Still, this does not mean that manual signs, even for epistemic modals, are absent from many sign languages. Looking at those elements across many sign languages, we find the entire spectrum from manual to non-manual means for each category. On the one hand, we find some interesting modality effects, as certain pragmatic meanings seem to be similar across some sign languages; on the other hand, language-specific aspects become apparent especially when comparing sign languages with different origins. Thus, more typological research is needed to clarify the complex picture of discourse particles in sign languages.

Notes

1 Baker (1977) and Baker-Shenk & Cokely (1980) consider eye contact as obligatory to initiate a turn in sign languages. However, Coates & Sutton-Spence (2001) argue that Deaf signers do not need to establish eye contact in conversations due to peripheral vision.

2 Just as in spoken languages, it is also possible in sign languages to have turn overlaps. For quantitative and qualitative research on turn overlaps in sign languages as well as a discussion of whether a one-at-a-time mode or rather a ‘collaborative’ mode is predominant in signed conversations, see McIlvenny (1995), Coates & Sutton-Spence (2001), Mesch (2001), McCleary & de Arantes Leite (2013), Groeber & Pochon-Berger (2014), and de Vos et al. (2015).

3 González et al. (2018, 2019) also consider the distribution of response particles below the discourse level, namely in question-answer pairs. Question-answer pairs are utterances that contain a question constituent (Q) and an answer constituent (A) that is produced by the same signer to make a statement (Caponigro & Davidson 2011). If the question constituent contains a negative polarity item, such as NEVER in (i), the answer constituent cannot confirm the truth of the question constituent, as shown by the ungrammaticality of (ib) and (ic). Instead, the answer must reject its truth, as in (ia) and (id). However, both a polarity-based (ia) and a truth-based strategy (id) are possible. For further discussion of the syntactic and prosodic structure of question-answer pairs in sign languages, see Hoza et al. (1997), Wilbur (1996), Davidson et al. (2008), Kimmelman & Vink (2017), and Herrmann et al. (2019).

  (i)
                 br                               hn
      a. Zoe:   [Q IXZoe PLAY VIDEO GAME NEVER], [A YES ONCE-IN-A-WHILE]
                ‘I do play video games once in a while.’    AP: false    RP: positive
                 br                               hs
      b. Zoe: * [Q IXZoe PLAY VIDEO GAME NEVER], [A NO NEVER]
                (‘I never play video games.’)               AP: true     RP: negative
                 br                               hn  hs
      c. Zoe: * [Q IXZoe PLAY VIDEO GAME NEVER], [A YES NEVER]
                (‘I never play video games.’)               AP: true     RP: positive
                 br                               hs
      d. Zoe:   [Q IXZoe PLAY VIDEO GAME NEVER], [A NO ONCE-IN-A-WHILE]
                ‘I do play video games once in a while.’    AP: false    RP: negative
      (ASL, González et al. 2018)

4 Gabarró-López et al. (2016) further explain that list-buoys may also enumerate signs or short strings of signs, and they distinguish this local use from its global function as a discourse marker. In line with this, Davidson (2013) labels list-buoys that indicate local enumeration as coordinate elements and analyzes them at the syntax-semantics interface.

5 In general, focus particles may be subsumed under pragmatic uses of focus, as most of them are non-truth-conditional. Thus, we take them to be discourse particles. Note, however, that some truth-conditional effects of restrictive particles have been discussed under the notion of semantic uses of focus or bound focus (Jacobs 1984; König 1991).




References Amundsen, Guri & Rolf Piene Halvorsen. 2011. Sign or gesture? Two discourse markers in Norwegian Sign Language (NSL). Talk presented at the Annual Meeting of the German Linguistic Society (DGfS), Göttingen, March 2011. Baker, Anne & Beppie van den Bogaerde. 2012. Communicative interaction. In Roland Pfau, Markus Steinbach, & Bencie Woll (eds.), Sign language: An international handbook, 489–​512. Berlin: De Gruyter Mouton. Baker, Anne & Beppie van den Bogaerde. 2016. Interaction and discourse. In Anne Baker, Beppie van den Bogaerde, Roland Pfau, & Trude Schermer (eds.), The linguistics of sign languages: An introduction, 73–​91. Amsterdam: John Benjamins. Baker, Charlotte. 1977. Regulators and turn-​taking in American Sign Language discourse. In Lynn A. Friedman (ed.), On the other hand: New perspectives on American Sign Language, 215–​236. New York: Academic Press. Baker-​Shenk, Charlotte Lee, & Dennis Cokely. 1980. American Sign Language: A teacher’s resource text on grammar and culture. Washington, DC: Gallaudet University Press. Bross, Fabian & Hole, Daniel. 2017. Scope-​taking strategies in German Sign Language. Glossa. A Journal of General Linguistics. 2(1). 76. Büring, Daniel. 2007. Semantics, intonation and information structure. In Gillian Ramchand & Charles Reiss (eds.), The Oxford handbook of linguistic interfaces, 445–​474, Oxford:  Oxford University Press. Cabeza-​Pereiro, Carmen. 2013. Modality and linguistic change in Spanish Sign Language (LSE). CogniTextes 10. Caponigro, Ivano & Kathryn Davidson. 2011. Ask, and tell as well:  Question-​answer clauses in American Sign Language. Natural Language Semantics 19 (4). 323–​371. Claus, Berry, A. Marlijn Meijer, Sophie Repp, & Manfred Krifka. 2017. Puzzling response particles: An experimental study on the German answering system. Semantics and Pragmatics 10(19). 1–​52. Coates, Jennifer & Rachel Sutton-​Spence. 2001. Turn-​taking patterns in deaf conversation. Journal of Sociolinguistics 5(4). 507–​529. Conlin, Frances, Paul Hagstrom, & Carol Neidle. 2003. A particle of indefiniteness in American Sign Language. Linguistic Discovery 2(1). 1–​29. Cooperrider, Kensy, Natasha Abner, & Susan Goldin-​Meadow. 2018. The palm-​up puzzle: Meanings and origins of a widespread form in gesture and sign. Frontiers in Communication 3: 23. Dachkovky, Svetlana & Wendy Sandler. 2009. Visual intonation in the prosody of a sign language. Language and Speech 52(23). 287–​314. Davidson, Kathryn. 2013. ‘And’ or ‘or’: General use coordination in ASL. Semantics and Pragmatics 6.  1–​44. Davidson, Kathryn, Ivano Caponigro, & Rachel Mayberry. 2008. The semantics and pragmatics of clausal question-​answer pairs in ASL. In Tova Friedman & Satoshi Ito (eds.), Semantics and Linguistic Theory (SALT) 18, 212–​229. Ithaca, NY: Cornell University. de Vos, Connie, Francisco Torreira, & Stephen C. Levinson. 2015. Turn-​ timing in signed conversations: Coordinating stroke-​to-​stroke turn boundaries. Frontiers in Psychology 6. 268. Engberg-​Pedersen, Elisabeth. 2002. Gestures in signing: The presentation gesture in Danish Sign Language. In Rolf Schulmeister & Reimo Reinitzer (eds.), Progress in sign language research: In honor of Siegmund Prillwitz, 143–​162. Hamburg: Signum Verlag. Engberg-​Pedersen, Elisabeth. 2020. Markers of epistemic modality and their origins:  Evidence from two unrelated sign languages. Studies in Language. Ferreira-​Brito, Lucinda. 1990. Epistemic, aletic, and deontic modalities in Brazilian Sign Language. In Susan D. 
Fischer & Patricia Siple (eds.), Theoretical issues in sign language research, Vol. 1: Linguistics, 229–​260. Chicago: University of Chicago Press. Fraser, Bruce. 1999. What are discourse markers? Journal of Pragmatics 31(7). 931–​952. Gabarró-​López, Sílvia, Laurence Meurant, & Gemma Barberà. 2016. Digging into buoys: Their use across genres and their status in signed discourse. Poster presented at Theoretical Issues in Sign Language Research (TISLR) 12, Melbourne, January 2016. Gianfreda, Gabriele, Virginia Volterra, & Andrzej Zuczkowski. 2014. L’espressione dell’incertezza nella Lingua dei Segni Italiana (LIS). Ricerche di Pedagogia e Didattica –​Journal of Theories and Research in Education 9(1). [Special Issue, ed. by Andrzej Zuczkowski & Letizia Caronia,



Discourse particles Communicating certainty and uncertainty:  Multidisciplinary perspectives on epistemicity in everyday life]. Goldin-​Meadow, Susan & Diane Brentari. 2017. Gesture, sign, and language: The coming of age of sign language and gesture studies. Behavioral and Brain Sciences 40. E46. González, Aurore, Kate Henninger, & Kathryn Davidson. 2018. Answering negative questions in American Sign Language. Talk presented at NELS 49, Cornell University, October 2018. González, Aurore, Kate Henninger, & Kathryn Davidson. 2019. Answering negative questions in American Sign Language. In Maggie Baird & Jonathan Pesetsky (eds.), NELS 49: Proceedings of the Forty-​Ninth Annual Meeting of the North East Linguistic Society 2, 31–​44. González-​Fuente, Santiago, Susagna Tubau, M. Teresa Espinal, & Pilar Prieto. 2015. Is there a universal answering strategy for rejecting negative propositions? Typological evidence on the use of prosody and gesture. Frontiers in Psychology 6. 899. Goodhue, Daniel & Michael Wagner. 2018. Intonation, yes and no. Glossa: A Journal of General Linguistics 3(1). 5. Groeber, Simone & Evelyne Pochon-​Berger. 2014. Turns and turn-​taking in sign language interaction: A study of turn-​final holds. Journal of Pragmatics 65. 121–​136. Herrmann, Annika. 2007. The expression of modal meaning in German Sign Language and Irish Sign Language. In Pamela Perniss, Roland Pfau, & Markus Steinbach (eds.), Visible variation. Comparative studies on sign language structure, 245–​278. Berlin: Mouton de Gruyter. Herrmann, Annika. 2013. Modal and focus particles in sign languages:  A cross-​linguistic study. Berlin: De Gruyter Mouton. Herrmann, Annika. 2015. The marking of information structure in German Sign Language. In Frank Kügler & Stefan Baumann (eds.), Prosody and information status in typological perspective. Special Issue of Lingua 165, Part B. 277–​297. Herrmann, Annika, Sina Proske, & Elisabeth Volk. 2019. Question-​answer pairs in sign languages. In Malte Zimmermann, Klaus von Heusinger, & Edgar Onea (eds.), Questions in discourse (Vol. 2: Pragmatics), 96–​131. Leiden: Brill. Herrmann, Annika & Markus Steinbach. 2012. Quotation in sign languages:  A visible context shift. In Isabelle Buchstaller & Ingrid van Alphen (eds.), Quotatives. Cross-​linguistic and cross-​ disciplinary perspectives, 203–​228. Amsterdam: John Benjamins. Holmberg, Anders. 2013. The syntax of answers to polar questions in English and Swedish. Lingua 128.  31–​50. Hoza, Jack. 2011. The discourse and politeness functions of HEY and WELL in American Sign Language. In Cynthia B. Roy (ed.), Discourse in signed languages, 69–​ 95. Washington, DC: Gallaudet University Press. Jacobs, Joachim. 1984. Funktionale Satzperspektive und Illokutions-​ Semantik. Linguistische Berichte 91. 25–​58. Jones, Bob Morris. 1999. The Welsh answering system. Berlin: De Gruyter Mouton. Kimmelman, Vadim. 2019. Information structure in sign languages:  Evidence from Russian Sign Language and Sign Language of the Netherlands. Berlin: De Gruyter Mouton & Ishara Press. Kimmelman, Vadim & Lianne Vink. 2017. Question-​ answer pairs in Sign Language of the Netherlands. Sign Language Studies 17(4). 417–​449. König, Ekkehard. 1991. The meaning of focus particles. A comparative perspective. London: Routledge. Krifka, Manfred. 2013. Response particles as propositional anaphors. Semantics and Linguistic Theory (SALT) 23. 1–​18. Kuno, Susumo. 1973. The structure of the Japanese language. Cambridge, MA: MIT Press. Lackner, Andrea. 2007. 
Turn-​ taking in der Österreichischen Gebärdensprache. Eine Gesprächsanalyse der Salzburger Variante. Graz: University of Graz MA thesis. Lackner, Andrea. 2017. Functions of head and body movements in Austrian Sign Language. Berlin: De Gruyter Mouton & Ishara Press. Levinson, Stephen C. 2010. Questions and responses in Yélî Dnye, the Papuan language of Rossel Island. Journal of Pragmatics 42(10). 2741–​2755. Liddell, Scott. 2003. Grammar, gesture and meaning in American Sign Language. Cambridge: Cambridge University Press. Lillo-​Martin, Diane. 2012. Utterance reports and constructed action in sign and spoken languages. In Roland Pfau, Markus Steinbach, & Bencie Woll (eds.), Sign language. An international handbook, 365–​387. Berlin: De Gruyter Mouton.



Elisabeth Volk & Annika Herrmann Loos, Cornelia, Marlijn Meijer, Markus Steinbach, & Sophie Repp. 2019. Responding to negative assertions. A typological perspective on ‘yes’ and ‘no’ in German Sign Language. Talk presented at the Institut Jean Nicod, Paris, France, February 2019. Loos, Cornelia, Markus Steinbach, & Sophie Repp. 2020. Affirming and rejecting assertions in German Sign Language (DGS). Proceedings of Sinn und Bedeutung 24. 1–19. Martínez, Liza B. 1995. Turn-​taking and eye gaze in sign conversations between deaf Filipinos. In Ceil Lucas (ed.), Sociolinguistics in Deaf communities, 272–​306. Washington, DC:  Gallaudet University Press. McCleary, Leland Emerson & Tarcísio de Arantes Leite. 2013. Turn-​taking in Brazilian Sign Language: Evidence from overlap. Journal of Interactional Research in Communication Disorders 4(1). 123–​154. McIlvenny, Paul. 1995. Seeing conversations:  Analyzing sign language talk. In Paul ten Have & George Psathas (eds.), Situated order: Studies in social organization and embodied activities, 129–​ 150. Washington, DC: University Press of America. McKee, Rachel & Sophia Wallingford. 2011. ‘So, well, whatever’: Discourse functions of palm-​up in New Zealand Sign Language. Sign Language & Linguistics 14(2). 213–​247. Mesch, Johanna. 2001. Tactile Sign Language: Turn taking and questions in signed conversations of deaf-​blind people. Hamburg: Signum Verlag. Metzger, Melanie & Ben Bahan. 2001. Discourse analysis. In Ceil Lucas (ed.), The sociolinguistics of sign languages, 112–​144. Cambridge: Cambridge University Press. Papaspyrou, Chrissostomos, Alexander von Meyenn, Michaela Matthaei, & Bettina Herrmann. 2008. Grammatik der Deutschen Gebärdensprache aus der Sicht gehörloser Fachleute. Hamburg: Signum Verlag. Petronio, Karen & Diane Lillo-​Martin. 1997. WH-​movement and the position of Spec-​CP: Evidence from American Sign Language. Language 73(1). 18–​57. Pope, Emily. 1976. Questions and answers in English. The Hague: Mouton. Quer, Josep, Carlo Cecchetto, Caterina Donati, Carlo Geraci, Meltem Kelepir, Roland Pfau, & Markus Steinbach. 2017. SignGram Blueprint:  A guide to sign language grammar writing. Berlin: De Gruyter Mouton (open access at www.degruyter.com). Rooth. Mats. 1992. A theory of focus interpretation. Natural Language Semantics 1. 75–​116. Roush, Daniel. 2007. Indirectness strategies in American Sign Language requests and refusals: Deconstructing the deaf-​as-​direct stereotype. In Melanie Metzger & Earl Fleetwood (eds.), Translation, sociolinguistic, and consumer issues in interpreting, 103–​156. Washington, DC: Gallaudet University Press. Roy, Cynthia B. 1989. Features of discourse in an American Sign Language lecture. In Ceil Lucas (ed.), The sociolinguistics of the Deaf community, 231–​251. San Diego, CA: Academic Press. Salazar, Ventura. 2008. The expression of modality in Spanish Sign Language. Paper presented at the 13th International Conference on Functional Grammar (ICFG13), University of Westminster, London. Sandler, Wendy. 2010. Prosody and syntax in sign languages. Transactions of the Philological Society 108(3). 298–​328. Schaffer, Barbara. 2002. CAN’T:  the negation of modal notions in ASL. Sign Language Studies 3(1).  34–​53. Schaffer, Barbara. 2004. Information ordering and speaker subjectivity: Modality in ASL. Cognitive Linguistics 15(2). 175–195. Schaffer, Barbara & Terry Janzen. 2016. Modality and mood in American Sign Language. 
In Jan Nuyts & Johan van der Auwera (eds.), The Oxford handbook of modality and mood, 448–​470. Oxford: Oxford University Press. Schiffrin, Deborah. 1987. Discourse markers. Cambridge: Cambridge University Press. Siyavoshi, Sara. 2019. Hands and faces:  The expression of modality in Iranian Sign Language (ZEI). Albuquerque, NM: University of New Mexico PhD dissertation. Stede, Manfred & Birte Schmitz. 2000. Discourse particles and discourse functions. Machine Translation 15. 125–​147. Van Herreweghe, Mieke. 2002. Turn-​taking mechanisms and active participation in meetings with deaf and hearing participants in Flanders. In Ceil Lucas (ed.), Turn-​taking, fingerspelling and contact in signed languages, 73–​103. Washington, DC: Gallaudet University Press.



Discourse particles van der Kooij, Els, Onno Crasborn, & Johan Ros. 2006. Manual prosodic cues:  Palm-​up and pointing signs. Poster presented at Theoretical Issues in Sign Language Research (TISLR) 9, Florianópolis, Brazil, December 2006. van Loon, Esther. 2012. What’s in the palm of your hands? Discourse functions of palm-​up in Sign Language of the Netherlands. Amsterdam: University of Amsterdam MA thesis. van Loon, Esther, Roland Pfau, & Markus Steinbach. 2014. The grammaticalization of gestures in sign languages. In Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill, & Sedinha Tessendorf (eds.), Body –​language –​communication: An international handbook on multimodality in human interaction (Vol. 2), 2133–​2149. Berlin: De Gruyter Mouton. Volk, Elisabeth. 2017. The integration of palm-​up into sign language grammar:  Structuring discourse in DGS. Talk presented at the workshop ‘The Week of Signs and Gestures’, University of Stuttgart, June 2017. Ward, Nigel & Wataru Tsukahara. 1999. A responsive dialog system. In Yorick Wilks (ed.), Machine conversations, 169–​174. Dordrecht: Kluwer. Waters, Dafydd & Rachel Sutton-​Spence. 2005. Connectives in British Sign Language. Deaf Worlds 21(3).  1–​29. Wilbur, Ronnie B. 1996. Evidence for the function and structure of wh-​clefts in American Sign Language. In William H. Edmondson & Ronnie B. Wilbur (eds.), International review of sign linguistics, 209–​256. Mahwah, NJ: Erlbaum. Wilbur, Ronnie B. 2011. Nonmanuals, semantic operators, domain marking, and the solution to two outstanding puzzles in ASL. Sign Language & Linguistics 14(1). 148–​178. Wilbur Ronnie B. & Cynthia Patschke. 1998. Body leans and the marking of contrast in American Sign Language. Journal of Pragmatics 30. 275–​303. Wilcox, Phyllis. 1996. Deontic and epistemic modals in ASL:  A discourse analysis. In Adele Goldberg (ed.), Conceptual structure, discourse and language, 481–​492. Cambridge: Cambridge University Press. Wilcox, Sherman & Phyllis Wilcox. 1995. The gestural expression of modality in ASL. In Joan Bybee & Suzanne Fleischmann (eds.), Modality in grammar and discourse, 135–​162. Amsterdam: John Benjamins. Zeshan, Ulrike (ed.). 2006. Interrogative and negative constructions in sign languages. Nijmegen: Ishara Press. Zimmermann, Malte. 2011. Discourse particles. In Klaus von Heusinger, Claudia Maienborn, & Paul Portner (eds.), Semantics. An international handbook of natural language meaning, 2011–​ 2038. Berlin, New York: De Gruyter.



23
LOGICAL VISIBILITY AND ICONICITY IN SIGN LANGUAGE SEMANTICS
Theoretical perspectives

Philippe Schlenker

23.1  Introduction

We argue that sign languages have a crucial role to play in the foundations of semantics, for two reasons. First, in some cases sign languages provide overt evidence on crucial aspects of the Logical Form of sentences, ones that must be inferred indirectly in spoken language (= ‘Logical Visibility’). Second, along one dimension, sign languages are strictly more expressive than spoken languages because iconic phenomena can be found at their logical core (= ‘Iconicity’). From this perspective, spoken language semantics is along some dimensions a ‘simplified’ version of sign language semantics, one from which the iconic component has been mostly lost (for background, see also Zucchi (2012)). While one may conclude that the full extent of Universal Semantics can only be studied in sign languages, an alternative is that spoken languages have comparable expressive resources, but only when co-speech gestures are taken into account – hence the need for a precise semantics for gestures as well. We state our hypothesis of Logical Visibility in (1) (see also Lillo-Martin & Klima (1990) and Wilbur (2003, 2008), among others).

(1)  Hypothesis 1: Logical Visibility1
     Sign languages can make overt some mechanisms which (i) have been posited in the analysis of the Logical Form of spoken language sentences, but (ii) are not morphologically realized in spoken languages.

Examples will involve in particular (i) covert variables that have been posited to disambiguate relations of binding in spoken language, and are realized as loci in sign languages; and (ii) covert operations of context shift, which have been argued to be useful to analyze the behavior of indexicals in some spoken languages, and are realized as role shift in sign languages.2




We state our hypothesis about Iconicity in (2) (see Schlenker (2018: 129); for reference to iconic views, see Cuxac (1999), Taub (2001), Liddell (2003), Kegl (2004), and Cuxac & Sallandre (2007)).

(2)  Hypothesis 2: Iconicity
     Sign languages make use of expressions that simultaneously have a logical/grammatical function and an iconic semantics, defined as a semantics in which some geometric properties of signs must be preserved by the interpretation function.

Examples will primarily involve sign language loci, which can simultaneously fulfill the role of variables and display pictorial/diagrammatic properties. Here too, we will not claim that iconic effects do not exist in spoken languages, but we will suggest that the richness of iconicity in sign languages and its seamless integration into the logical engine of the language raise particular challenges.

23.2  Logical Visibility I: visible variables

We start our discussion of Logical Visibility with sign language loci, which were analyzed by several researchers (starting with Lillo-Martin & Klima (1990)) as the overt manifestation of logical variables. We lay out this hypothesis in Section 23.2.1, illustrate it in the case of individual-referring loci in Section 23.2.2, trace some of its consequences for debates about the existence of time and world variables in Section 23.2.3, and then step back in Section 23.2.4 to ask how strong the analogy between loci and variables really is. (We leave out ‘dynamic variables’ from the present discussion; they are discussed at greater length in Schlenker (2011b, 2018).)

23.2.1  Variable Visibility

Sentences such as (3a) and (4a) can be read in three ways, depending on whether the embedded pronoun is understood to depend on the subject, on the object, or to be deictic.

(3) a. Sarkozyi told Obamak that hei/k/m would be re-elected.
    b. Sarkozy λi Obama λk ti told tk that hei/k/m would be re-elected.

(4) a. [A representative]i told [a senator]k that hei/k/m would be re-elected.
    b. [a representative]i λi [a senator]k λk ti told tk that hei/k/m would be re-elected.

These ambiguities have been analyzed in great detail in frameworks that posit that pronouns have the semantics of variables, which may be bound by a quantifier, or left free – in which case they receive their value from an assignment function provided by the context. For instance, in the textbook analysis of Heim & Kratzer (1998), one way to represent the ambiguity of (3a) is through the representation in (3b), where a bona fide Logical Form would be obtained by choosing the index i, k, or m for the pronoun he (since the subject and object are referring expressions, there are several alternative ways to represent the ambiguity). (4b) summarizes three possible Logical Forms of (4a) within the same framework, depending on whether he is given the index i, k, or m. Sometimes these representations can get quite complex, for instance to capture the fact that plural pronouns may be simultaneously bound by several quantifiers, as in (the relevant reading of) (5a), represented as in (5b).

(5) a. [A representative]i told [a senator]k that theyi, k would (both) be re-elected.
    b. [a representative]i λi [a senator]k λk ti told tk that theyi+k would be re-elected.

In this case, it is essential on the relevant reading that they should be simultaneously dependent on a representative and on a senator, hence the ‘sum’ index i+k that appears on they in (5b). In this section, we survey recent results that suggest that sign languages display an overt version of something close to the indices of (3)–(5), and that this fact can be used to revisit some foundational questions in semantics. However, it will prove useful to distinguish between two versions of this hypothesis of ‘Variable Visibility’ (Schlenker 2018). According to the weak version (6a), it is possible to associate both to a pronoun and to its antecedent a symbol (namely a locus) that marks their dependency, and to associate to different deictic pronouns different symbols if they denote different objects. According to the strong version (6b), the symbols in question – loci – really do display the behavior of variables – which, as we will see below, is a strictly stronger (and possibly overly strong) claim.

(6) Variable Visibility
    a. Weak version
       In sign languages, a given locus in signing space can be associated both to a pronoun and to its antecedent to mark their dependency. Furthermore, deictic pronouns that refer to different objects may be associated to different loci.
    b. Strong version
       In sign languages, some uses of loci display the behavior of logical variables, both in their bound and in their free uses.
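For readers less familiar with the assignment-function treatment just mentioned, the following is a minimal sketch in the style of Heim & Kratzer (1998); the notation is a standard textbook one and is not quoted from the chapter or its sources.

     ⟦hei⟧^g = g(i)                    (a pronoun bearing index i denotes whatever the assignment function g maps i to)
     ⟦λi α⟧^g = λx. ⟦α⟧^g′, where g′ = g[i→x]   (a binder with index i shifts the assignment so that i is mapped to the argument x)

On the bound readings of (3b), the index on he matches one of the λ-operators, so the pronoun’s value co-varies with the chosen antecedent; on the deictic reading, the index m remains free and g(m) is simply supplied by the context of utterance.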

23.2.2  Loci as variables3

As mentioned, Lillo-Martin & Klima (1990) argued that logical variables or indices, which are usually covert in spoken languages, can be overtly realized in sign language by positions in signing space or ‘loci’. In case a pronoun is used deictically or indexically, its locus usually corresponds to the actual position of its denotation, be it the speaker, the addressee, or some third person (e.g., Meier 2012). If the pronoun is used anaphorically, the antecedent typically establishes a locus, which is then ‘indexed’ (= pointed at) by the pronoun. In the American Sign Language (ASL) example in (7a), the sign names BUSH and OBAMA establish loci by being signed in different positions; in (7b), the antecedent noun phrases are accompanied with pointing signs that establish the relevant loci. In quantificational examples, indexing disambiguates among readings, as in (8) from French Sign Language (LSF). (Note that throughout, translations are followed by the references of the target videos.)




(7) a. IX-1 KNOW BUSHa IX-1 KNOW OBAMAb. IX-b SMART BUT IX-a NOT SMART.
       ‘I know Bush and I know Obama. He [= Obama] is smart but he [= Bush] is not smart.’ (ASL; 4, 179)
    b. IX-1 KNOW PAST PRESIDENT IX-a IX-1 KNOW NOW PRESIDENT IX-b. IX-b SMART BUT IX-a NOT SMART.
       ‘I know the former President and I know the current President. He [= the current President] is smart but he [= the former President] is not smart.’ (ASL; 4, 179)
       (ASL, Schlenker 2011b: 350)

(8) DEPUTYb SENATORa CLb-CLa IX-b a-TELL-b IX-a / IX-b WIN ELECTION
    ‘An MPb told a senatora that hea / heb (= the deputy) would win the election.’ (LSF; 4, 233)
    (LSF, Schlenker 2016: 1068)

A crucial property of sign language anaphora is that loci can be created ‘on the fly’ in many different positions of signing space, and that there is no clear upper bound on the number of loci that can simultaneously be used, besides limitations of performance (since signers need to be able to distinguish loci from each other, and to keep their position and denotation in memory). Now there are spoken languages in which third-​person reference can be disambiguated by grammatical means, for instance by way of a distinction between proximate and obviative marking (e.g., in Algonquian languages, see Hockett (1966)) or in switch-​reference systems (e.g., Finer 1985). But these only make it possible to distinguish among a small number of third-​person elements –​typically two or three (for instance, ‘proximate’, ‘obviative’, and sometimes ‘double obviative’ in obviative systems). By contrast, there seems to be an unlimited number of potential distinctions in sign language, and in this case the signed modality –​and specifically the fact that loci can be realized as points in space –​seems to play a crucial role in Variable Visibility. As is well-​known, when a pronoun denotes a plurality it may be realized by an ‘arc’ pointing sign, which thus indexes a semi-​circular area; and there are also dual and even trial pronouns when the pronoun denotes two or three individuals. Strikingly, these pronouns can simultaneously index several loci in cases corresponding to the ‘split antecedents’ discussed in (5). Thus, in (9), the dual pronoun THE-​TWO-​a,b is realized as a horizontal 2 that goes back and forth between the two loci; and it can be checked that this is no accident: if the position of the loci is modified, the movement that realizes THE-​TWO changes accordingly. (9)

IX-​1 HAVE TWO TICKET. IF 1-​GIVE JOHNa BILLb, THE-​TWO-​a,b HAPPY.

     ‘I have two tickets. If I give them to John and Bill, they will be happy.’ (ASL; 2, 180)
     (ASL, Schlenker 2018: 137)

More complex cases can easily be constructed, with trial or plural pronouns indexing more than two loci. Because there appears to be an arbitrary number of possible loci, it was suggested that these do not spell out morpho-syntactic features, but rather are the overt realization of formal indices (Lillo-Martin & Klima 1990; Sandler & Lillo-Martin 2006; we revisit this point in Section 23.2.4). Importantly, there are some striking similarities between sign language pronouns and their spoken counterparts, which makes it desirable to offer a unified theory.4

The first similarity is that sign language pronouns obey at least some of the syntactic constraints on binding studied in spoken language syntax. For instance, versions of the following rules have been described for ASL (Lillo-Martin 1991; Sandler & Lillo-Martin 2006; Koulidobrova 2011; Schlenker & Mathur 2013): Condition A, which mandates that a reflexive pronoun such as himself co-refer with a local antecedent (e.g., Hei admires himselfi); Condition B, which prohibits a non-reflexive pronoun from overlapping in reference with a local antecedent (hence the deviance of #Hei admires himi, understood with coreference); and Strong Crossover, which prohibits a quantificational expression from moving to the left of a coindexed pronoun that c-commands its base position (hence the deviance of #[Which man]i does hei think I will hire ti, where ti is the base position of the interrogative expression, and hei is coindexed with it).

The second similarity is that, in simple cases at least, the same ambiguity between strict and bound variable readings is found in both modalities (see Lillo-Martin & Sandler (2006); further cases will be discussed below); this is illustrated in (10), which has the same two readings as in English: the third person mentioned can be understood to like his mother, or the speaker’s mother.5

(10) IX-1 POSS-1 MOTHER LIKE. IX-a SAME-1,a.
     Ambiguous: I like my mother. He does too [= like my / like his mother] (ASL; 1, 108)
     (ASL, Schlenker et al. 2013: 93)

23.2.3  Individual, time and world variables6

We turn to the debate concerning the existence of an abstract anaphoric mechanism that applies in similar fashion to the nominal, temporal, and modal domains. In a nutshell, we suggest that ASL loci have all three uses, and thus provide an argument in favor of the existence of such an abstract system. In what follows, temporal and modal uses of loci have roughly the same meaning as the English word then, which has both temporal and modal uses; the crucial difference is that in ASL the very same word can have nominal, temporal and modal uses (and locative uses as well, as we will see shortly); and that it arguably ‘wears its indices on its sleeves’ because of the variable-like uses of loci. The point is by no means trivial. In the tradition of modal and tense logic, it was thought that expressions are only implicitly evaluated with respect to times and possible worlds: language was thought to be endowed with variables denoting individuals, but not with variables denoting times or possible worlds. By contrast, several researchers argued after Partee (1973) and Stone (1997) that natural language has time- and world-denoting variables – albeit ones that usually manifest themselves as affixes (tense, mood) rather than as full-fledged pronominal forms. Here we make the simple suggestion that ASL pronouns in their various forms can have nominal, temporal, modal and also locative uses. The full argument has three steps:

1. As we discussed above, nominal anaphora in sign language usually involves (i) the establishment of positions in signing space, called ‘loci’, for antecedents; (ii) pointing towards these loci to express anaphora. Both properties are also found in the temporal and modal domains.
2. This observation does not just hold of the singular index; temporal uses of dual, trial, and plural pronouns can be found as well. The phenomenon is thus general, and it is not plausible to posit that it is an accident that all these morphologically distinct pronouns simultaneously have nominal, temporal, and modal uses: indexing per se seems to have all these uses.
3. Temporal and modal anaphora in ASL can give rise to patterns of inference that are characteristic of so-called ‘donkey’ pronouns (i.e., pronouns that depend on existential antecedents without being in their syntactic scope).

Here, we just illustrate the first step of the argument and refer the reader to Schlenker (2013a) for further details. Let us start with temporal indexing: It can be seen in (11) that the same possibilities are open for temporal anaphora as were displayed for nominal anaphora in (7) and (8): antecedents establish loci; pronominal forms retrieve them by way of pointing (‘re’ = raised eyebrows).7

(11) a. [YESTERDAY RAIN]a [DAY-BEFORE-YESTERDAY SNOW]b.
          re                re
        IX-b IX-1 HAPPY. IX-a IX-1 NOT HAPPY.
        ‘Yesterday it rained and the day before yesterday it snowed. Then [= the day before yesterday] I was happy but then [= yesterday] I wasn’t happy.’ (ASL; 4, 181)
     b. [WHILE RAIN]a TEND WARM. [WHILE SNOW]b TEND COLD.
          re                re
        IX-b IX-1 HAPPY. IX-a IX-1 NOT HAPPY.
        ‘When it rains it is warm but when it snows it is cold. Then [= when it snows] I am happy but then [= when it rains] I am not happy.’ (ASL; 4, 182)
     c. Context: I went skiing during the holidays.
        [SOMETIMES RAIN]a [SOMETIMES SNOW]b.
          re                re
        IX-b IX-1 HAPPY. IX-a IX-1 NOT HAPPY.
        ‘Sometimes it rained and sometimes it snowed. Then [= when it snowed] I was happy but then [= when it rained] I wasn’t happy.’ (ASL; 4, 195)
        (ASL, Schlenker 2013a: 213f)

As can be seen, temporal indexicals, when-clauses (which are semantically similar to definite descriptions of times), and existential time quantifiers (sometimes) can all give rise to patterns of anaphora involving the same pronoun IX as in the nominal case (the existential case involves an instance of ‘dynamic binding’ whose nominal counterpart is discussed at greater length in Schlenker (2011b)). Importantly, loci appear in the usual signing space, which is in front of the signer. Although the words for tomorrow and yesterday are signed on the ‘time line’, which is on a sagittal plane (tomorrow is signed in the front, yesterday towards the back), no pointing occurs towards it, at least in this case (but see Emmorey (2002) for discussion). Let us turn to modal indexing. While there are no clear world indexicals or world proper names, modals such as can are standardly analyzed as existential quantifiers over possible worlds; and if-clauses have occasionally been treated as definite descriptions of possible worlds (e.g., Bittner 2001; Schlenker 2004; Bhatt & Pancheva 2006). Both cases can give rise to locus indexing in ASL, as shown in (12).



(12) a. TOMORROW [POSSIBLE RAIN]a [POSSIBLE SNOW]b.
          re                re
        IX-b IX-1 HAPPY. IX-a IX-1 NOT HAPPY.
        ‘Tomorrow it might rain and it might snow. Then [= if it snows] I’ll be happy. Then [= if it rains] I won’t be happy.’ (ASL; 4, 183)
     b. [IF RAIN TOMORROW]a WILL WARM. [IF SNOW TOMORROW]b WILL COLD.
          re                re
        IX-b IX-1 HAPPY. IX-a IX-1 NOT HAPPY.
        ‘If it rains tomorrow it will be warm, but if it snows tomorrow it will be cold. Then [= if it snows] I’ll be happy. Then [= if it rains] I won’t be happy.’ (ASL; 4, 183)
        (ASL, Schlenker 2013a: 215f)

We conclude that explicit anaphoric reference to times and possible worlds is possible in ASL –​though our analysis leaves it entirely open whether times and worlds should be primitive types of entities in our ontology or should be treated as varieties of a more general category of situations.

23.2.4  Variables or features – or both?8

We take examples such as (7)–(9), as well as much of the foregoing discussion, to have established the plausibility of the Weak Hypothesis of Variable Visibility in (6a): a given locus may be associated both with a pronoun and with its antecedent to mark their dependency; furthermore, deictic pronouns that refer to different objects may be associated with different loci. But this does not prove that loci share in all respects the behavior of logical variables, and thus these facts do not suffice to establish the Strong Hypothesis of Variability in (6b). This stronger hypothesis was recently challenged by Kuhn (2015), who argues that loci should be seen as features akin to person and gender features, rather than as variables. On a positive level, Kuhn argues that the disambiguating effect of loci in (7) to (9) can be explained if loci are features that pronouns inherit from their antecedents, just as is the case of gender features in spoken languages (and it is uncontroversial that these are not variables). On a negative level, Kuhn argues that treating loci as variables predicts that they should obey two constraints that are in fact refuted by his ASL data. First, a variable is constrained to depend on the structurally closest operator it is coindexed with. Thus, the boxed variable x1 in (13a) cannot be semantically dependent on the universal quantifier ∀x1 because of the intervening quantifier ∃x1 – by contrast with (13b), where the intervening quantifier carries a different index. For the same reason, the boxed variable in both formulas cannot be free and refer (deictically, in linguistic parlance) to a fixed individual.

(13)  Variable capture in First-Order Logic: x1 can be bound by ∀x1 in b. but not in a.
      a. ∀x1 ∃x1 … P x1 …
      b. ∀x1 ∃x2 … P x1 …



By the same token, the two occurrences of the variable x1 in (14) must have the same semantic value – in particular, if no quantifier binding x1 appears at the beginning of the formula, both occurrences will be free and will denote a fixed individual.

(14)  Variable re-use in First-Order Logic: The two occurrences of x1 must denote the same object.
      … Px1 & Qx1 …

Kuhn (2015) argues that both predictions are incorrect: first, expected cases of variable capture fail to arise under the quantificational adverb only; second, multiple occurrences of the same locus may refer to different individuals. For brevity, we only discuss the second problem here (see Schlenker (2016) for the first problem). Kuhn shows that in (15), a single locus is assigned to John and Mary, and another locus is assigned to Bill and Suzy. As a result, the boxed occurrences IX-a and IX-b refer to John and Bill respectively, while the underlined pronouns ix-a and ix-b refer to Mary and Suzy. (Note that in this example, and some of the examples to follow, the number preceding the gloss (‘6’ in this case) represents the (average) rating of the example on a 7-point scale by one or more informants, with 7 being the highest rating.)

(15) 6  EVERY-DAY, JOHNa TELL MARYa IX-a LOVE ix-a. BILLb NEVER TELL SUZYb IX-b LOVE ix-b.
        ‘Every day, Johni tells Maryj that hei loves herj. Billk never tells Suzyl that hek loves herl.’
        (ASL, Kuhn 2016: 464)

As Kuhn observes, this example is problematic for the variable-based view. The initial association of the proper name JOHN with variable a should force a to refer to John; but then how can a also refer in the same clause, and without any intervening binder, to Mary? By contrast, these data are unproblematic for the feature-based analysis of loci: just like two noun phrases may bear the same feminine gender features while denoting different individuals, so it is with loci-as-features. Locus re-use is certainly limited by pragmatic or other constraints – a more standard strategy is to assign one locus per individual. Kuhn’s argument is really an existential proof that in some cases, loci display a behavior which is incompatible with the view that they spell out variables. Two directions have been explored to solve these problems. On the one hand, Kuhn treats loci as features which are not interpreted (so that neither the problem of variable capture nor the problem of variable re-use can arise in the first place), but are inherited by a mechanism of morpho-syntactic agreement; this allows him to provide a variable-free treatment of loci, which is developed in great detail in Kuhn (2016) (as Kuhn observes, the fact that loci are not variables does not show that there are no variables in the relevant Logical Forms, just that loci are not them; giving a variable-free treatment of these data is thus a possibility but certainly not a necessity). Schlenker (2016), on the other hand, suggests instead that loci may both display the behavior of variables and of features – they are thus ‘featural variables’. Specifically, when they are interpreted, their semantics is given by an assignment function, just like that of standard indices. But they may be disregarded in precisely the environments in which person or gender features can be disregarded. Furthermore, in many environments, loci constrain the value of covert variables.




23.3  Logical Visibility II: beyond variables

In this section, we turn to further cases – not related to loci – in which sign language makes overt certain parts of Logical Forms that are usually covert in spoken language. The first case involves context-shifting operators, which were argued in semantic research to be active but covert in spoken language (e.g., Schlenker 2003; Anand & Nevins 2004; Anand 2006). Following Quer (2005), we propose that context shift can be realized overtly in sign language, by way of an operation called ‘role shift’. We then move to the aspectual domain and summarize results which suggest that some primitive categories in the representation of aspectual classes are made visible in sign language but are usually covert in spoken language (the ‘Event Visibility Hypothesis’ of Wilbur (2003)).

23.3.1  Role shift as visible context shift9

23.3.1.1  Basic data

Two strands of research on context-dependency have come together in recent years. In the semantics of spoken languages, considerable attention has been devoted to the phenomenon of context shift, as evidenced by the behavior of indexicals. While these were traditionally thought to depend rigidly on the context of the actual speech act (Kaplan 1989), it turned out that there are languages and constructions in which this is not so: some attitude operators appear to be able to ‘shift the context of evaluation’ of some or all indexicals (e.g., Schlenker 1999, 2003, 2011c; Anand & Nevins 2004; Anand 2006). In research on sign languages, there has been a long-standing interest in role shift, an overt operation (often marked by body shift and/or eyegaze shift) by which the signer signals that she adopts the perspective of another individual (e.g., Padden 1986; Lillo-Martin 1995; Sandler & Lillo-Martin 2006; see also Steinbach, Chapter 16). Role shift comes in two varieties: it may be used to report an individual’s speech or thought – henceforth ‘Attitude Role Shift’. Or it may be used to report in a particularly vivid way an individual’s actions – henceforth ‘Action Role Shift’ (a more traditional term in sign language research is ‘Constructed Action’). Quer (2005) connected these two strands of research by proposing that Attitude Role Shift is overt context shift. His main motivation was that some or all indexicals that appear in its scope acquire a shifted interpretation. For such an argument to be cogent, however, an alternative analysis must be excluded, one according to which the role-shifted clause is simply quoted – for quoted clauses are arguably mentioned rather than used, which obviates the need to evaluate their content relative to a shifted context.10 Quer’s argument is in two steps (2005, 2013). First, he shows that some indexicals in Attitude Role Shift in Catalan Sign Language (LSC) have a shifted interpretation, i.e., are intuitively evaluated with respect to the context of the reported speech act. Second, he shows that in some of these cases clausal quotation cannot account for the data because other indexicals can be evaluated with respect to the context of the actual speech act. This pattern is illustrated in (16), where the first-person pronoun IX-1 is evaluated with respect to the reported context (and thus refers to Joan), while HERE is evaluated with respect to the actual context.

(16)        t
      IXa MADRIDm MOMENT JOANi
                      RS-i
      THINK IX-1i STUDY FINISH HEREb
      'When he was in Madrid, Joan thought he would finish his study here (in Barcelona).'
      (LSC, Quer 2005: 154)

As emphasized by Quer (2013), it is also possible to understand HERE as being shifted; but the reading with a 'Mixing of Perspectives' found in (16) is crucial to argue that there is context shift rather than standard quotation.11

23.3.1.2  Typology: 'Mixing of Perspectives' vs. 'Shift Together'

In order to account for his data, Quer (2005) makes use of a framework developed in Schlenker (2003), in which attitude operators could bind object-language context variables, with the result that a given embedded clause could include both shifted and unshifted indexicals. In Schlenker (2003), the argument for this possibility of a 'Mixing of Perspectives' came from preliminary Amharic data, as well as data from Russian. Schematically, Schlenker (2003) posited Logical Forms such as those in (17), where an attitude verb binds a context variable c, while a distinguished variable c* denoting the actual speech act remains available for all indexicals. As a result, when two indexicals indexical1 and indexical2 appear in the scope of an attitude verb, they may be evaluated with respect to different context variables, as is illustrated in (17).

(17)  'Mixing of Perspectives' in Schlenker (2003):
      … attitude-verbc … indexical1(c) … indexical2(c*) …
      … attitude-verbc … indexical1(c*) … indexical2(c) …
      … attitude-verbc … indexical1(c) … indexical2(c) …
      … attitude-verbc … indexical1(c*) … indexical2(c*) …

While agreeing that some attitude verbs are context shifters, Anand & Nevins (2004) and Anand (2006) argued that 'Mixing of Perspectives' is undesirable. Specifically, they showed that in Zazaki, an Indo-Aryan language of Turkey, if an indexical embedded under an attitude verb receives a shifted reading, so do all other indexicals that are found in the same clause – a constraint they labeled 'Shift Together':

(18)  'Shift Together' (Anand & Nevins 2004): If an indexical is shifted in the scope of a modal operator, all other indexicals in the same clause must be shifted as well.
      … attitude verb … δ [… shifted indexical1 … shifted indexical2 …]

For Anand & Nevins (2004) and Anand (2006), a covert context-shifting operator is optionally present under the verb say in Zazaki, but crucially it does not bind context variables, and just manipulates an implicit context parameter. When the operator is absent, the embedded clause behaves like an English clause in standard indirect discourse. When the context-shifting operator is present, it shifts the context of evaluation of all indexicals within its scope – hence the fact that we cannot 'mix perspectives' within the embedded clause. This is schematically represented in (19):

(19)  'Shift Together' in Anand & Nevins (2004) and Anand (2006):
      … attitude-verb … indexical1 … indexical2 …        => neither indexical is shifted
      … attitude-verb Op … indexical1 … indexical2 …     => both indexicals are shifted

While the initial debate was framed as a choice between two competing theories of context shift, an alternative possibility is that different context-shifting constructions pattern differently in this connection (e.g., with Zazaki going with 'Shift Together', and Russian and Amharic with 'Mixing of Perspectives'). The sign language data that have been explored thus far argue for this ecumenical view: some languages allow for 'Mixing of Perspectives', while others obey 'Shift Together'. Arguing for 'Mixing of Perspectives', the data from LSC in (16) mirror the Russian data in that two indexicals that appear in the same clause may be evaluated with respect to different contexts. Similarly, German Sign Language (DGS) allows for 'Mixing of Perspectives', with a shifted indexical co-existing with an unshifted one in the same clause (Herrmann & Steinbach 2012; Hübl & Steinbach 2012; Quer 2013). Arguing for 'Shift Together', Schlenker (2017b, 2017c) shows that ASL and LSF replicate the Zazaki pattern: under role shift, all indexicals are obligatorily shifted. A case in point is displayed in (20), where the first-person pronoun IX-1 and the adverb HERE are both signed under role shift, and both are obligatorily interpreted with a shifted meaning.

(20)  Context: the speaker is in NYC
                                       RSa
      7   IN LA WHO IX-a JOHNa SAY IX-1 WILL MEET HERE WHO
      'In LA, who did John say he would meet there [in LA]?' (ASL; 6, 293; 6, 316)
      Informant JL (on a video on which he signed the sentence [ASL, 6, 316]): 7, HERE = LA
      Informant 2 (on a video on which he signed the sentence with IX-b replacing IN [ASL; 6, 293]): 7, HERE = preferably LA [ASL; 6, 294–295].12
      (ASL, Schlenker 2017b)

In sum, given the available data, it seems that the typology of context-shifting operations in sign language mirrors that found in spoken language: some languages/constructions obey 'Shift Together', whereas others allow for 'Mixing of Perspectives'. The difference between the two modalities is, of course, that in sign language role shift is overtly realized.
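To make the architectural contrast between (17) and (19) concrete, the following toy sketch (our own illustration, not drawn from the cited works) reduces contexts to two coordinates and evaluates a pair of indexicals under each regime; all names and values are hypothetical.

```python
# Toy illustration of the two architectures for context shift discussed above.
# This is only a sketch of the logic schematized in (17) and (19); it is not
# an implementation of any published fragment.

ACTUAL   = {"agent": "the signer", "location": "Barcelona"}   # c*: actual context
REPORTED = {"agent": "Joan",       "location": "Madrid"}      # c: reported context

def mixing_of_perspectives(indexicals):
    """Schlenker (2003)-style Logical Forms, cf. (17): each indexical carries its
    own context variable ('c' or 'c*'), so shifted and unshifted indexicals can
    co-occur within a single embedded clause."""
    contexts = {"c": REPORTED, "c*": ACTUAL}
    return {name: contexts[var][coord] for name, (coord, var) in indexicals.items()}

def shift_together(indexicals, operator_present):
    """Anand & Nevins (2004)-style evaluation, cf. (19): a covert operator shifts
    a single context parameter, so either every indexical in its scope is
    shifted, or none is."""
    context = REPORTED if operator_present else ACTUAL
    return {name: context[coord] for name, coord in indexicals.items()}

# LSC-style 'Mixing of Perspectives', as in (16): IX-1 shifted, HERE unshifted.
print(mixing_of_perspectives({"IX-1": ("agent", "c"), "HERE": ("location", "c*")}))
# {'IX-1': 'Joan', 'HERE': 'Barcelona'}

# Zazaki/ASL-style 'Shift Together': with the operator present, both shift.
print(shift_together({"IX-1": "agent", "HERE": "location"}, operator_present=True))
# {'IX-1': 'Joan', 'HERE': 'Madrid'}
```

The only point of the sketch is structural: in the first regime the choice of context is made indexical by indexical, while in the second a single operator-level choice decides the fate of all indexicals in its scope.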

23.3.1.3  Further complexities

While this basic picture of role shift as overt context shift is appealingly simple, it abstracts away from important complexities. First, role shift does not just occur in attitude reports (= 'Attitude Role Shift'), but it can also be used in action reports, especially to display in a particularly vivid fashion some parts of the action through iconic means (= 'Action Role Shift'). Attitude Role Shift can target an entire clause, as well as any indexicals within it (optionally or obligatorily depending on the language). By contrast, Action Role Shift is more constrained; depending on the author, in ASL it is believed to just target verbs (Davidson 2015), or possibly larger constituents, but if so, only ones that contain no indexicals or first-person agreement markers (Schlenker 2017b).

Be that as it may, any context-shifting analysis of role shift must be extended in non-trivial ways to account for Action Role Shift (for a proposal, see Schlenker (2017b)).

Second, it is not clear that ASL and LSF role shift cannot be analyzed in terms of quotation. Indexicals will not help, since the data mentioned above seem to argue that all indexicals are evaluated with respect to the same perspectival point, which is also what one would expect in standard quotation. In spoken languages, a standard strategy to disprove a quotational analysis of a clause under say is to establish a grammatical dependency between the embedded clause and the matrix clause – with the assumption that 'grammatical dependencies do not cross quotation marks' (presumably because quoted material is mentioned, not used). Thus, quotation is impossible in (21) and (22) because of a grammatical dependency between the quoted clause and the matrix clause, involving a moved interrogative expression ('wh-extraction') in (21) and a dependency between a Negative Polarity Item (NPI) and its negative licenser in (22).

(21) * What did Mary say: 'I understand _'?
(22) * Mary didn't say: 'I understand any chemistry'.

Now in the data reported in Schlenker (2017b, 2017c), ASL role shift allows for wh-extraction out of role-shifted clauses, but so does another construction that is plausibly quotational (because it involves a sign for quotation at the beginning of a non-role-shifted clause). For this reason, the evidence that the role-shifted clause does not involve quotation is weak – maybe quotation just does allow for wh-extraction in our ASL data, for unknown reasons. Furthermore, another standard test of indirect discourse fails; it involves the licensing of a Negative Polarity Item, ANY, by a negative element found in the matrix clause. When the embedded clause is in standard indirect discourse, any can be licensed by a matrix negation both in the English sentence in (23a), and in an analogous sentence in ASL. When the clause is quoted, as in the English example in (23b), any cannot be licensed by a negation in the matrix clause. Crucially, an analogous sentence with role shift in ASL displays a pattern similar to (23b), which suggests that Attitude Role Shift does have a quotational component.

(23) a. John never said he showed Mary any kindness.
     b. # John never said: 'I showed Mary any kindness'.

In addition, in LSF wh-extraction out of role-shifted clauses fails, just as it fails out of a quoted sentence in the English example in (21); this too suggests that Attitude Role Shift has a quotational component. Thus, in ASL and LSF, the argument that role shift involves context shift rather than quotation depends rather heavily on the existence of Action Role Shift, which could not be analyzed in quotational terms (because it is used to report actions rather than thought- or speech-acts). By contrast, in LSC and DGS, the argument against a quotational analysis is fairly strong due to the ability of role-shifted clauses to mix perspectives.

Finally, Schlenker (2017b, 2017c), following much of the literature, argues that role shift comes with a requirement that some elements be interpreted iconically (and suggests that the quotational effects just discussed are a special case of iconicity). We come back to this point in Section 23.5.3.

23.3.2  Aspect: visible event decomposition

Cases of Visibility are not limited to the domains of reference (as in Section 23.2) and context-dependency (as in Section 23.3.1). Wilbur (2003) argued that sign language makes visible certain parts of the logical structure of verbs – and coined the term 'Event Visibility' to label her main hypothesis. To introduce it, a bit of background is needed. Semanticists traditionally classify event descriptions as telic if they apply to events that have a natural endpoint determined by that description, and they call them atelic otherwise. Ann spotted Mary and Ann understood have such a natural endpoint – the point at which Ann spotted Mary and came to an understanding, respectively; Ann knew Mary and Ann reflected lack such a natural endpoint and are thus atelic. Standardly (e.g., Rothstein 2004), a temporal modifier of the form in α time modifies telic VPs while for α time modifies atelic VPs (e.g., Ann reflected for a second vs. Ann understood in a second).

Now Wilbur's hypothesis is that the distinction between telic and atelic predicates is often realized overtly in ASL by: (i) change of handshape aperture (open/closed or closed/open); (ii) change of handshape orientation; and (iii) abrupt stop at a location in space or contact with a body part (Wilbur & Malaia 2008). On a theoretical level, Wilbur (2008) posits that in ASL and other sign languages, telicity is overtly marked by the presence of an affix dubbed EndState, which "means that an event has a final state and is telic. Its phonological form is 'a rapid deceleration of the movement to a complete stop'" (Wilbur 2008: 232), which can come in several varieties, as illustrated in (24). Remarkably, then, Wilbur's findings suggest that sign language can articulate overtly some grammatically relevant aspects of event decomposition. In Section 23.5.2, we will revisit Wilbur's Event Visibility Hypothesis, asking whether it might not follow from a more general property of structural event iconicity.

(24)  Examples of movements in signs denoting telic events (Wilbur 2008: 232; © Signum Press, reprinted with permission)

23.4  Iconicity I: iconic variables

23.4.1  Introduction

In the cases we have discussed up to this point, sign language makes some aspects of the Logical Forms of sentences more transparent than they are in spoken language. In this section, we turn to cases in which sign language has greater expressive power than spoken languages because it makes greater use of iconic resources. There are certainly iconic phenomena in spoken language, for instance in the sentence 'The talk was loooooong' (see Okrent 2002): the excessive duration of the vowel gives a vivid idea of the real or experienced duration of the talk (as one might expect, saying that 'the talk was shooooooort' would yield a rather odd effect). But sign languages make far more systematic use of iconicity, presumably because their depictive resources are much greater than those of spoken languages. While one might initially seek to separate neatly between a 'grammatical/logical' and an 'iconic' component in sign language, we will see that the two are closely intertwined: iconic phenomena are found at the core of the logical engine of sign language. In particular, we will revisit in detail the case of sign language loci, and we will argue that in some cases they are simultaneously logical variables and simplified pictures of what they denote.

23.4.2  Embedded loci: plurals13

The simplest instance of an iconic constraint concerns plural ASL and LSF loci, which are usually realized as (semi-)circular areas. These can be embedded within each other, and we hypothesize that this gives rise to cases of structural iconicity, whereby topological relations of inclusion and relative complementation in signing space are mapped into mereological analogues in the space of loci denotations.

Our initial focus is on the anaphoric possibilities made available in English by the sentence Most students came to class. Recent research has argued that such a sentence makes available two discourse referents for further anaphoric uptake: one corresponding to the maximal set of students, as illustrated in (25b) ('maximal set anaphora'); and one for the entire set of students, as illustrated in (25c) ('restrictor set anaphora').

(25) a. Complement set anaphora: #Most students came to class. They stayed home instead.
     b. Maximal set anaphora: Most students came to class, and they asked good questions.
     c. Restrictor set anaphora: Most students came to class. They are a serious group.

Crucially, however, no discourse referent is made available for the set of students that did not come to class ('complement set anaphora', as this is the complement of the maximal set within the restrictor set); this is what explains the deviance of (25a). This anaphoric pattern, whereby they in (25a) is read as referring to the students that did not come, is at best limited when the initial quantifier is few, and nearly impossible with most. Nouwen (2003) argues that when available, complement set anaphora involves inferred discourse referents: no grammatical mechanism makes available a discourse referent denoting the complement set – here: the set of students who didn't come.

On the basis of ASL and LSF data, Schlenker et al. (2013) made two main observations.

Observation I. When a default plural locus is used in ASL, data similar to (25) can be replicated – e.g., complement set anaphora with most is quite degraded. This is illustrated in (26), with average judgments (per trial) on a 7-point scale, with a total of five trials and three informants.

(26) a. POSS-1 STUDENT FEW a-CAME CLASS. 3.6  IX-arc-a a-STAY HOME.
     b. POSS-1 STUDENT MOST a-CAME CLASS. 2.8  IX-arc-a a-STAY HOME.
     Intended: 'Few/Most of my students came to class. They [the students that didn't come] stayed home.'
     (ASL, Schlenker et al. 2013: 98)

Observation II. When embedded loci are used, the effect is circumvented: one large locus (written as ab, but signed as a single circular locus) denotes the set of all students; a sublocus (= a) denotes the set of students who came; and a complement locus (= b) thereby becomes available, denoting the set of students who did not come, as illustrated in (27).

(27)  POSS-1 STUDENT IX-arc-ab MOST IX-arc-a a-CAME CLASS.
      'Most of my students came to class.'
      a. 7  IX-arc-b b-STAY HOME
            'They stayed home.'
      b. 7  IX-arc-a a-ASK-1 GOOD QUESTION
            'They asked me good questions.'
      c. 7  IX-arc-ab SERIOUS CLASS
            'They are a serious class.'
      (ASL; 8, 196) (ASL, Schlenker et al. 2013: 98)

(28)  [diagram of the embedded plural loci used in (27); image not reproduced]

Schlenker et al. (2013) account for Observation I and Observation II by assuming that Nouwen is right that in English, as well as ASL and LSF, the grammar fails to make available a discourse referent for the complement set, i.e., the set of students who did not come; but that the mapping between plural loci and mereological sums preserves relations of inclusion and complementation, which in (27a) makes available the locus b.

The main assumptions are that (a) the set of loci is closed with respect to relative complementation: if a is a sublocus of b, then (b-a) is a locus as well; and (b) assignment functions are constrained to respect inclusion and relative complementation: if a is a sublocus of b, the denotation of a is a subpart of the denotation of b, and (b-a) denotes the expected complement set. These conditions are stated more completely in (29):

(29)  Conditions on loci
      Let LOC be the set of plural loci that appear in signing space, and let s be an admissible assignment function that assigns values to loci. We make the assumptions in (a) and (b), where we view plural loci as sets of geometric points, and loci denotations as sets of individuals.
      a. Conditions on LOC: for all a, b ∈ LOC,
         (i) a ⊆ b or b ⊆ a or a ∩ b = ∅;
         (ii) if a ⊂ b, (b-a) ∈ LOC
      b. Conditions on s: for all a, b ∈ LOC,
         (i) a ⊂ b iff s(a) ⊂ s(b);
         (ii) if a ⊂ b, s(b-a) = s(b)-s(a)
      (ASL, Schlenker et al. 2013: 101)

Since it is unusual to take a symbol to be part of another symbol, it should be emphasized that the notation a ⊆ b is to be taken literally, with the locus (and thus symbol) a being a subpart of the locus b (this can for instance be further analyzed as: the set of points a in signing space is a subset of the set of points b in signing space). The condition a ⊂ b iff s(a) ⊂ s(b) should thus be read as: the locus a is a proper subpart of the locus b just in case the denotation of a is a proper subpart of the denotation of b.14

Let us now see how the conditions on loci in (29) derive our sign language data. In (27a), where embedded loci are used, we can make the following reasoning:
• Since a is a proper sublocus of a large locus ab, we can infer by (29a-ii) that (ab-a) (i.e., b) is a locus as well;
• by (29b-i), we can infer that s(a) ⊂ s(ab);
• and by (29b-ii), we can infer that s(b) = s(ab)-s(a).

In this way, complement set anaphora becomes available because ASL can rely on an iconic property which is inapplicable in English. But this does not mean that there is a proper grammatical (non-iconic) difference between these two languages: as we saw, with default loci the English data are replicated, which suggests that Nouwen's assumption that the grammar does not make available a discourse referent for the complement set applies to ASL just as it does to English. Rather, it is because of iconic conditions on plural loci, not grammar in a narrow sense, that a difference does arise in the case of embedded loci.15
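As a concrete (and deliberately simplistic) illustration of how this reasoning follows from (29), one can model plural loci as sets of points in signing space and their denotations as sets of individuals; the following sketch is our own illustration, with hypothetical names and values, and is not part of the cited analysis.

```python
# Minimal sketch of the conditions on plural loci in (29).
# Loci are frozensets of 'points' in signing space; an assignment maps loci to
# sets of individuals. All names and values below are purely illustrative.

STUDENTS = frozenset({"s1", "s2", "s3", "s4", "s5"})
CAME     = frozenset({"s1", "s2", "s3", "s4"})        # the maximal set (those who came)

# The large locus ab and its sublocus a, as in (27); by (29a-ii) the relative
# complement b = ab - a must itself be a locus.
locus_ab = frozenset({1, 2, 3, 4, 5, 6})
locus_a  = frozenset({1, 2, 3})
locus_b  = locus_ab - locus_a
LOC = {locus_ab, locus_a, locus_b}

# (29a-i): any two loci are nested or disjoint.
assert all(x <= y or y <= x or not (x & y) for x in LOC for y in LOC)

def admissible(s):
    """Check (29b): s preserves proper inclusion and relative complementation."""
    for x in LOC:
        for y in LOC:
            if (x < y) != (s[x] < s[y]):              # (29b-i)
                return False
            if x < y and s[y - x] != s[y] - s[x]:     # (29b-ii)
                return False
    return True

s = {locus_ab: STUDENTS, locus_a: CAME, locus_b: STUDENTS - CAME}
print(admissible(s))   # True: by (29b-ii), b can only denote the complement set
print(s[locus_b])      # frozenset({'s5'}) -- now available for anaphoric uptake
```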

23.4.3  High and low loci

In the preceding section, relations of inclusion and relative complementation among loci were shown to be preserved by the interpretation function. We now turn to cases in which the vertical position of loci is meaningful and argues for an iconic analysis as well. While loci are usually established in a single horizontal plane, in some contexts they may be signed high or low. Our point of departure lies in the inferences that are obtained with high and low loci in such cases. An ASL example without quantifiers, from Schlenker et al. (2013), is given in (30). In brief, high loci are used to refer to tall, important, or powerful individuals, whereas low loci are used to refer to short individuals (similar data were described for LSF in Schlenker et al. (2013)). Loci of normal height are often unmarked and thus do not trigger any relevant inference.

(30)  YESTERDAY IX-1 SEE R [= body-anchored proper name]. IX-1 NOT UNDERSTAND IX-a[high/normal/low]
      'Yesterday I saw R [= body-anchored proper name]. I didn't understand him.' (ASL; 11, 24; 25)
      a. 7  High locus. Inference: R is tall, or powerful/important
      b. 7  Normal locus. Inference: nothing special
      c. 7  Low locus. Inference: R is short
      (ASL, Schlenker et al. 2013: 103)

As can be seen, the relevant inferences are preserved under negation, which provides initial motivation for treating them as presuppositional in nature, a proposal that has been made about the semantic specifications of pronouns, such as gender, in spoken language (Cooper 1983). Importantly, high and low loci can appear under binding, with results that are expected from the standpoint of a presuppositional analysis. From this perspective, (31a) is acceptable because the bound variable her_i ranges over female individuals; and (31b) is acceptable to the extent that one assumes that the relevant set of directors only comprises females.

(31) a. [None of these women]_i thinks that I like her_i.
     b. [None of these directors]_i thinks that I like her_i.

Similar conditions on bound high and low loci apply in (32) (here too, similar examples were described for LSF):

(32)  NO TALL MAN THINK IX-1 LIKE IX-a
      'No tall man thinks that I like him.' (ASL; 11, 27)
      a. 7  High locus
      b. 6  Normal locus
      c. 3  Low locus
      (ASL, Schlenker et al. 2013: 104)

As argued in Schlenker et al. (2013) and Schlenker (2014), it will not do to treat height specifications of loci as contributing information about an intrinsic property of their denotations, for instance in terms of being tall or short. This is because in some of their uses they provide information about the spatial position of the upper part of a person's body. This was shown by paradigms in which individuals appeared in several positions, standing or hanging upside down. In the latter case, pointing associated with a tall individual became low, in accordance with the general claims of Liddell (2003): the locus behaved like a structured representation of its denotation, with a head and a foot – hence when the individual was introduced as hanging upside down, the locus appeared in upside down position as well, and pointing towards the head of the locus implied pointing downwards.

A formal analysis was developed for simple cases in Schlenker et al. (2013), based on the idea that height differences among loci should be proportional to the height differences among their denotations. The analysis took as its starting point the presuppositional theory of gender features developed in Cooper (1983), given in (33): a pronoun she_i evaluated under an assignment function s refers to s(i), unless the presupposition triggered by the feminine features of she – that its denotation is female – is not satisfied.

(33)  Gender specifications
      Let c be a context of speech, s an assignment function and w a world (with cw = the world of c).
      [[she_i]]^{c, s, w} = # iff s(i) = # or s(i) is not female in cw.
      If [[she_i]]^{c, s, w} ≠ #, [[she_i]]^{c, s, w} = s(i).
      (ASL, Schlenker et al. 2013: 106)

Schlenker et al. (2013) extend this presuppositional analysis to high and low loci, but with an iconic condition in the presuppositional part, boldfaced in (34).

(34)  Height specifications
      Let c be a context of speech, s an assignment function and w a world (with cw = the world of c). If i is a locus, n is a locus with neutral height, h is a measure of the heights of loci in signing space, hc is a measure (given by the context c) of heights of objects in cw, and αc > 0 is a parameter given by the context c,
      [[IX-i]]^{c, s, w} = # iff s(i) = # or [h(i) ≠ h(n) and hc(s(i)) – hc(s(n)) ≠ αc(h(i) – h(n))].
      If [[IX-i]]^{c, s, w} ≠ #, [[IX-i]]^{c, s, w} = s(i).
      (ASL, Schlenker et al. 2013: 113)
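To see the presuppositional format of (33) and the proportionality requirement of (34) at work, here is a minimal numerical sketch (our own illustration; the heights, the mapping from loci to heights, and the value of αc are all invented for the purpose of the example).

```python
# Sketch of the presuppositional entries in (33)-(34); '#' marks presupposition
# failure. All numbers are made up for illustration; only the shape of the
# conditions mirrors the definitions above.

FAIL = "#"

def she(i, s, is_female):
    # (33): she_i denotes s(i) unless the gender presupposition fails.
    if s.get(i) is None or not is_female(s[i]):
        return FAIL
    return s[i]

def ix(i, n, s, h, h_c, alpha_c):
    # (34): IX-i denotes s(i) unless locus i is height-marked (h(i) != h(n)) and
    # the real-world height difference between s(i) and s(n) fails to be
    # proportional (by alpha_c) to the height difference between the two loci.
    if s.get(i) is None:
        return FAIL
    if h[i] != h[n] and h_c[s[i]] - h_c[s[n]] != alpha_c * (h[i] - h[n]):
        return FAIL
    return s[i]

s   = {"a": "R", "n": "a person of average height"}
h   = {"a": 2, "n": 0}                                   # locus heights (arbitrary units)
h_c = {"R": 190, "a person of average height": 170}      # body heights in cm

print(she("a", {"a": "Mary"}, is_female=lambda x: x == "Mary"))  # 'Mary'
print(ix("a", "n", s, h, h_c, alpha_c=10))   # 'R': 190 - 170 == 10 * (2 - 0)
print(ix("a", "n", s, h, h_c, alpha_c=5))    # '#': the proportionality check fails
```

The sketch only encodes the ordering/proportionality part of (34); the richer, non-discrete behavior discussed below (individuals rotated in various positions) is not modeled.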

As was the case in our analysis of plural loci in Section 23.4.2, loci have the semantics of variables, but their realization – specifically: their height in signing space – affects their meaning. In words, the condition in (34) considers a pronoun IX-i indexing a locus i, and compares its height to that of a neutral locus n. It says that the height difference between the denotations s(i) and s(n) should be proportional to the height difference between the loci i and n, with a multiplicative parameter αc > 0; in particular, this condition imposes that orderings be preserved.16

While one might be tempted to posit a small number of heights along the vertical plane, Schlenker (2014) argues that a full-fledged semantics is needed: in paradigms involving astronauts training in a variety of positions and thus rotated in four positions, the loci seemed to track the position of the astronauts in a non-discrete fashion.

Finally, one could ask how integrated into the grammatical system height specifications are. We mentioned above that their semantics in Schlenker et al. (2013) was modeled after that of gender features, albeit with an iconic twist. Schlenker (2014) cautiously suggests that height specifications resemble gender features in another respect: they can somehow be disregarded under ellipsis. An example is given in (35a), where the elided VP has a bound reading, unlike its overt counterpart in (35c). On the (standard) assumption that VP ellipsis is effected by copying part of the antecedent VP, this suggests that the feminine features of that antecedent can be ignored by ellipsis resolution, as represented with a barred pronoun in (35b).

(35)  In my study group,
      a. Mary did her homework, and John did too.
         => available bound variable reading in the second clause
      b. Mary λi t_i did her_i homework, and John λi t_i did [do her_i homework] too
      c. Mary did her homework, and John did her homework too.
         => no bound variable reading in the second clause
      (ASL, Schlenker 2014: 309)

Schlenker (2014) discusses analogous ASL and LSF examples in which height specifications can be ignored in a similar fashion, but the interpretation of these results requires some care. The main question is whether the ability of an element to be disregarded under ellipsis is only true of featural elements or targets a broader class. Schlenker (2014) did not give a final answer, and we will see below that co-speech gestures in spoken language, which certainly do not count as 'features', can almost certainly be disregarded in this way.

In conclusion, the various pronouns we have just discussed display a grammatical behavior as bound variables while also contributing iconic information about the position of their denotations. In this domain, sign language has a more expressive semantics than spoken language, which is devoid of rich iconic mechanisms of pronominal reference.

23.5  Iconicity II: beyond variables

As mentioned at the outset, iconic conditions are pervasive in sign language, and are definitely not limited to the semantics of variable-like constructions. With no claim to exhaustivity, we discuss below three cases that have been important in the recent literature and are also of foundational interest.

23.5.1  Classifier constructions

Some 'classifier constructions' were shown in Emmorey & Herzig (2003) to give rise to gradient iconicity effects in native signers of ASL. Specifically, they designed an experiment in which the position of a dot in relation to a bar could be indicated in a gradient fashion by way of a small object classifier positioned relative to a flat object construction; and they showed that subjects could indeed recover gradient information from the relative position of the signs. While the formal analysis of such constructions is still under study, it is clear that one will need rules that make reference to iconic conditions. This can be achieved by directly incorporating iconic conditions in semantic rules, as we did for high and low loci above, and as was sketched for the case of classifiers in Schlenker (2011a). Alternatively, one could take these expressions to have a demonstrative component that makes reference to the gesture performed while realizing the sign itself, a proposal made in Zucchi (2011) and in Davidson (2015). An example from Zucchi (2011) is given in (36a) and paraphrased in (36b).

(36) a. CAR CL-vehicle-DRIVE-BY
     b. 'A car drove by like this', where the demonstration is produced by the movement of the classifier predicate in signing space
     (after Zucchi 2011)

Here, CL-​vehicle-​DRIVE-​BY is a classifier predicate used to describe the movement of vehicles. The movement of the classifier predicate in signing space tracks in a gradient fashion the movement performed by the relevant car in real space. As informally shown in (36b), Zucchi takes the classifier predicate to have a normal meaning (specifying that a vehicle moved) as well as a demonstrative component, which is self-​referential; in effect, the classifier predicate ends up meaning something like: ‘moved as demonstrated by this very sign’. We come back in Section 23.6 to the possibility that sign language semantics should quite generally make reference to a gestural component.

23.5.2  Event visibility or event iconicity?

In our discussion of loci, we saw that these lead a dual life: on the one hand, they have – in some cases at least – the behavior of logical variables; on the other hand, they can also function as simplified pictures of what they denote. As it turns out, a similar conclusion might hold of Wilbur's cases of 'Event Visibility' discussed in Section 23.3.2: sign language phonology makes it possible to make visible key parts of the representation of events, but also to arrange them in iconic ways (see Kuhn (2015) and Kuhn & Aristodemo (2015) for a detailed discussion involving pluractional verbs, i.e., verbs that make reference to a plurality of events). A case in point can be seen in (37), which includes three different realizations of the sign for UNDERSTAND, three stages of which appear in (38) (all the signs involve lowered eyebrows, represented as a '~' above the sign).

(37)  YESTERDAY MATHEMATICS PROOF IX-1 UNDERSTAND
      'Yesterday I understood a mathematical proof.' (LSF; 49, 27; 4 trials)
      Realization of UNDERSTAND:
      a. 7    normal
              ~    ~
      b. 7    slow fast  => difficult beginning, but in the end, I understood
              ~    ~
      c. 5.5  fast slow  => easy beginning, then more difficult, but I understood
      (LSF, Schlenker 2018: 188)

(38)  Initial, intermediate, and final stage in the realization of UNDERSTAND in (37b) (Schlenker 2018: 189)

As illustrated in (38), UNDERSTAND is realized by the progressive closing of a tripod formed by the thumb, index, and middle finger of the dominant hand (right hand for a right-​handed signer). But different meanings are obtained depending on how the closure is effected. With a single change of speed, as in (37bc), the result is acceptable and semantically interpretable: if the sign starts slow and ends fast, one infers that the corresponding process had a similar time course; and conversely when the sign starts fast and ends slow (with two changes of speed, the results are deviant).17 Schlenker (2018) shows that similar iconic modulations can be obtained with the LSF atelic verb REFLECT. In the long term, two theoretical possibilities should be considered. One is that event iconicity should work alongside Wilbur’s Event Visibility Hypothesis, which should thus retain a special status, with discrete but covert distinctions of spoken language made visible in sign language. An alternative is that Wilbur’s data are a special case of event iconicity; on this view, telic and atelic verbs alike have the ability to map in a gradient fashion the development of an event, and it is for this more general reason that telic verbs mark endstates in a designated fashion (we come back to this point in Section 23.6).

23.5.3  Iconic effects in role shift

We now turn once again to the issue of role shift. In Section 23.3.1, we suggested that role shift can be analyzed as a visible instance of context shift. But we will now see that this analysis is incomplete and must be supplemented with a principle that makes reference to iconicity. In brief, we suggest that role shift is a visible instance of context shift, but one which comes with a requirement that the expressions under role shift should be interpreted maximally iconically. The argument is in two steps. First, we suggest that role shift under attitude reports (= Attitude Role Shift) has a strong quotational component, at least in ASL and LSF. Second, we suggest that role shift in action reports (= Action Role Shift) has an iconic component.

As was mentioned in Section 23.3.1.3, Schlenker (2017b, 2017c) notes that even in his ASL data, which allow for wh-extraction out of role-shifted clauses under attitude verbs, some tests suggest that these have a quotational component. First, an ASL version of the test discussed in (22), with ANY (which in some environments has a clear NPI behavior), suggests that it cannot appear under role shift without being quoted. Second, another test of indirect discourse, based on the licensing of ellipsis from outside the attitude report, similarly fails. For simplicity, we will just lay out its logic on an English example:

(39)  Context: The speaker has recently had a political conversation with John. The addressee and John have never met each other.
      a. You love Obama. John told me that he doesn't.
      b. (#) You love Obama. John told me: 'I don't.'
      (ASL, Schlenker 2017c)

In (39a), the elided VP in the second sentence is licensed by the first sentence, and one definitely does not infer that John's words involved an elided VP. The facts are different in (39b), which clearly attributes to John the use of the very words I don't – hence a possible deviance if the context does not explain why John might have used a construction with ellipsis. While standard indirect discourse in ASL patterns like English with respect to the licensing of ellipsis, the facts are different under role shift; there an elided VP is interpreted exactly as if it were quoted.

Finally, some non-linguistic properties of role-shifted clauses must usually be attributed to the agent rather than to the signer, and in this respect role shift differs from standard indirect discourse and resembles quotation. Schlenker (2017c) established this generalization (which is unsurprising for the traditional sign language literature) by asking his consultants to sign sentences in which the signer displays a happy face, indicated by the smiley icon ':-)' in (40b), with the underline showing which signs were accompanied by this facial expression. Importantly, this happy face is not a grammaticalized non-manual expression. The consultant was asked to start the happy face at the beginning of the report, to maximize the chance that it would be seen to reflect the signer's (rather than the agent's) happiness. In standard indirect discourse, this is indeed what was found. In Attitude Role Shift, by contrast, the judgments in (40) suggest that it is more difficult to attribute the happy face to the signer only, despite the fact that it starts outside the role-shifted clause, and that the context is heavily biased to suggest that the agent of the reported attitude was anything but happy.

(40)  SEE THAT ARROGANT FRENCH SWIMMER IX-a? YESTERDAY IX-a ANGRY.
      'See that arrogant French swimmer? Yesterday he was angry.'
                            RS-a
      a. 6.2  IX-a SAY IX-1 WILL LEAVE.
              'He said: "I will leave."'
              :-)
                            RS-a
      b. 4.6  IX-a SAY IX-1 WILL LEAVE.
              'He said: "I will leave."'
      (ASL; 14, 233)
      Rating under the meaning: The speaker is displaying his happiness that the French swimmer said he was leaving.
      (ASL, Schlenker 2017c)

(41)  SEE THAT ARROGANT FRENCH SWIMMER IX-a? YESTERDAY IX-a ANGRY.
      'See that arrogant French swimmer? Yesterday he was angry.'
                   RS-a
      a. 7    IX-a 1-WALK-WITH-ENERGY(CL-ONE).
              'He left with energy.'
              :-)  RS-a
      b. 3.6  IX-a 1-WALK-WITH-ENERGY(CL-ONE).
              'He left with energy.'
      (ASL; 14, 233)
      Rating under the meaning: The speaker is displaying his happiness that the French swimmer was leaving.
      (ASL, Schlenker 2017c)

Schlenker (2017c) took these and related data to suggest that iconic material is preferably understood to reflect properties of the reported action under role shift. The analysis proposed in Schlenker (2017c) posits that Attitude and Action Role Shift alike should be analyzed as context shift, but with an important addition: expressions that appear under role shift should be interpreted maximally iconically, i.e., so as to maximize the possibilities of projection between the signs used and the situations they make reference to. Following a long tradition (e.g., Clark & Gerrig 1990), Schlenker (2017c) argues that quotation can be seen as a special and particularly stringent case of iconicity, and that the condition of Maximal Iconicity can thus capture properties of both Attitude and Action Role Shift.18

If this analysis is on the right track, one key question is why context shift in sign language should come with a condition of iconicity maximization. One possibility is that such a condition exists in spoken language context shift as well but has not been described yet (however, Anand (2006) argues that in Zazaki context shift need not be quotational). Another possibility is that iconicity maximization under context shift is a specific property of sign language. Be that as it may, it seems clear that if role shift is to be analyzed as context shift, special provisions must be made for iconic effects.

23.6  Theoretical directions

If the foregoing is on the right track, it should be clear that sign languages have, along some dimensions, strictly richer expressive resources than spoken language does, in particular due to their ability to incorporate iconic conditions at the very core of the logical engine of language. There are two conclusions one might draw from this observation.

(i) One could conclude that spoken languages are, along some dimensions at least, a kind of 'simplified' version of what sign languages can offer. Specifically, as a first approximation one could view spoken language semantics as a semantics for sign languages from which most iconic elements have been removed, and indices have been made covert. From this perspective, if one wishes to understand the full scope of Universal Semantics, one might be better inspired to start from sign than from spoken language: the latter could be understood from the former once the iconic component is disregarded, but the opposite path might prove difficult.

(ii) An alternative possibility is that our comparison between sign and spoken language was flawed in the first place; in Goldin-Meadow and Brentari's words (2015: 14), 'sign should not be compared with speech – it should be compared with speech-plus-gesture'. What might be special about sign languages is that signs and co-speech gestures are realized in the same modality. By contrast, they are realized in different modalities in spoken language, which has led many researchers to concentrate solely on the spoken component. This leaves open the possibility that when co-speech gestures are reintegrated into the study of spoken language, sign and spoken languages end up displaying roughly the same expressive possibilities. Let us give a few illustrations of how the debate could be developed.

23.6.1  Plural pronouns

We noted in Section 23.4.2 that plural pronouns in ASL and LSF can give rise to instances of 'structural iconicity' when a plural locus is embedded within another plural locus. One could view this as a case in which sign language has a mechanism which is entirely missing in spoken language. But the realization of sign language loci makes it possible to use them simultaneously as diagrams. From this perspective, the right point of comparison for our examples with 'complement set anaphora' in Section 23.4.2 is spoken language examples accompanied with explicit diagrams of the same shape as the embedded loci in (28), to which one can point as one utters the relevant pronouns. For this reason, a comparison between spoken and sign language should start with situations in which speakers can use gestures to define diagrams. This comparison has not been carried out yet.

23.6.2  High loci

As summarized in Section 23.4.3, it was argued in Schlenker et al. (2013) and Schlenker (2014) that high loci have an iconic semantics, and in addition that their height specifications behave like 'features' in some environments, notably under ellipsis: just like gender features, height specifications can apparently be disregarded by whatever mechanism interprets ellipsis resolution. We fell short of arguing that this shows that height specifications are features, for good reason. First, Schlenker (2014) shows that it is hard to find cases in which height specifications really behave differently from other elements that contribute presuppositions on the value of a referring expression (some paradigms displaying this difference were found in ASL but not in LSF). Second, when co-speech gestures are taken into account in spoken languages, it appears that they too can be disregarded under ellipsis (Schlenker 2018).19 Thus in (42a), the co-speech gesture (for a tall person) that accompanies the verb phrase can be disregarded under ellipsis; whereas in the control in (42b), deviance is obtained if the gesture that accompanies the antecedent verb phrase is explicitly repeated in the second clause (whereas a gesture for a short person is acceptable).

(42)  I had two guys standing in front of me, one of them very short and the other one very tall.
      a. The tall one allowed me to remove [TALL gesture] [his glasses], but the short one didn't.
      b. The tall one allowed me to remove [TALL gesture] [his glasses], but the short one didn't allow me to remove #[TALL gesture] [his glasses] / ok [SHORT gesture] [his glasses].
      (Schlenker 2018: 198)
      (The bracketed gesture labels stand in for the gesture pictures of the original, which are not reproduced here.)

These observations suggest that one could account for height specifications of loci in at least two ways. One could analyze them by analogy with features in spoken language, and argue that they share their behavior under ellipsis. Alternatively, one could seek to analyze height specifications as co-​speech gestures that happen to be merged with signs, and to explain their behavior under ellipsis by the fact that other co-​speech gestures can somehow be transparent to ellipsis resolution.

23.6.3  Role shift20 We suggested above that role shift is ‘visible context shift’, with an important addition:  Attitude and Action Role Shift alike have an iconic component (‘Maximize Iconicity!’) which has not been described for spoken language context shift. But one could challenge this analysis by taking role shift to be in effect indicative of the fact that the role-​shifted signs have a demonstrative, gestural component, and thus are in effect both signs and co-​speech gestures. This is the theoretical direction explored by Davidson (2015). Following Lillo-​Martin (1995, 2012), Davidson takes role shift to behave in some respects like the expression ‘be like’ in English, which has both quotational and co-​speech uses, as illustrated in (43). (43) a. b.

John was like ‘I’m happy’. Bob was eating like [gobbling gesture].

(Davidson 2015: 487, 489)

More specifically, Davidson suggests that in role shift the signer’s body acts as a classifier and is thus used to demonstrate another person’s signs, gestures, or actions. She draws inspiration from Zucchi’s analysis of classifier constructions, briefly discussed in Section 23.5.1 above. Thus, for Davidson, no context shift is involved; rather, the signer’s body is used to represent another individual in the same way as the classifiers discussed in Section 23.5.1 represent an object. A potential advantage of her analysis is that it immediately explains the iconic effects found in role shift, since by definition role shift is used to signal the presence of a demonstration. We refer the reader to Schlenker (2017c) for a comparison between the context-​shifting and gestural analyses.

23.6.4  Telicity

Strickland et al. (2015) revisit Wilbur's Event Visibility Hypothesis, discussed in Sections 23.3.2 and 23.5.2 above. They show that non-signers who have not been exposed to sign language still 'know' Wilbur's generalization about the overt marking of telic endpoints in sign language: when asked to choose between a telic and an atelic meaning (e.g., 'decide' vs. 'think') for a sign language verb they have never seen, they are overwhelmingly accurate in choosing the telic meaning in case endpoints are marked. Furthermore, this result holds even when neither meaning offered to them is the actual meaning of the sign, which rules out the possibility that subjects use other iconic properties to zero in on the correct meaning. These results can be interpreted in at least two ways. One is that Wilbur's principle is such a strong principle of Universal Grammar that it is accessed even by non-signers. An alternative possibility is that non-signers use general and abstract iconic principles to determine when a sign/gesture can or cannot represent a telic event. This leaves open the possibility that Event Visibility derives from a general property of cognition rather than from specific properties of sign language – and possibly that similar effects could be found with co-speech gestures in spoken language (see Schlenker (2017a) for potential differences between iconic enrichments in signs vs. gestures).

Acknowledgments

Many thanks to Karen Emmorey, Maria Esipova, and Brent Strickland for relevant discussions. This chapter summarizes research that appeared in various articles, which owe a lot to the work of the following consultants: ASL – Jonathan Lamberton; LSF – Yann Cantin and Ludovic Ducasse. Their contribution is gratefully acknowledged. The research leading to these results received funding from the European Research Council under the European Union's Seventh Framework Programme (FP/2007–2013) / ERC grant agreement No. 324115–FRONTSEM (PI: Schlenker). Research was conducted at Institut d'Études Cognitives, École Normale Supérieure – PSL University. Institut d'Études Cognitives is supported by grant FrontCog ANR-17-EURE-0017. The research reported in this piece also contributes to the COST Action IS1006.

Notes

1 We use the term 'logical' loosely, to refer to primitive distinctions that play a key role in a semantic analysis. Some of our examples are logical categories in the strict sense (e.g., logical variables), but others are not (e.g., the representation of aspectual classes). We avoid the term 'LF visibility' because it might be associated with a different idea, namely that a particular level of representation posited in some generative approaches, called 'LF', might be more transparently visible in some languages than in others. It has sometimes been claimed, for instance, that the 'LF position of quantifiers' is relatively transparently represented in Hungarian (e.g., Brody & Szabolcsi 2003); this syntactic claim is of course distinct from the hypothesis discussed here.
2 We state the hypothesis existentially, as being about some mechanisms that are covert in spoken language but overt in sign language. This is in fact a well-worn type of argument in semantic typology. For instance, Szabolcsi 2000 (following Kiss 1991) argues that Hungarian 'wears its LF on its sleeve' because the scope of quantifiers is disambiguated by their surface position.
3 This section borrows from Schlenker et al. (2013) and Schlenker (2014).
4 While pointing can have a variety of uses in sign language (Sandler & Lillo-Martin 2006; Schlenker 2011a), we will restrict attention to pronominal uses.
5 For simplicity, we gloss IX-a as he, but without context, this pronoun could just as well refer to a female. On a theoretical level, we note that in order to provide a formal treatment of (10), we might need to posit a rule of 'locus erasure' – a point we will return to in Section 23.2.4.
6 This section borrows from Schlenker (2013a, b).

7 The eyebrow raise accompanies only the pronoun. Eyebrow raising is regularly found in topic and focus positions in general, and on if-clauses in particular; we include it because this non-manual marker appeared in the original transcriptions cited here.
8 This section borrows from Schlenker (2016).
9 This section borrows from Schlenker (2017b).
10 At this point, we assume that quotation must target an entire clause, and we come back below to the possibility of a theory with 'partial quotation', as argued in Maier (2014b).
11 In the end, Quer (2013) suggests on the basis of syntactic evidence that Attitude Role Shift in LSC can in some contexts involve quotation, but that in other contexts it involves bona fide indirect discourse with some shifted indexicals. For present purposes, we are only concerned with cases that can be shown not to involve standard quotation.
12 Informant 2 gave HERE = LA with a rating of 5/7; HERE = NYC with a rating of 2.5/7.
13 This section borrows from Schlenker (2013b) and Schlenker et al. (2013).
14 If we wanted to state an analogous condition in a more standard system in which the variables are letters rather than loci, we could for instance require that the denotation s(v) of a variable called v should be a subpart of the denotation s(w) of a variable called w because graphically v can be viewed as a subpart of w. Because inclusion of one symbol in another is so uncommon with letters, this would of course be a very odd condition to have; but it is a much more natural condition when the variables are loci rather than letters.
15 One additional remark should be made in connection with our discussion of the debate between the analyses of loci as variables vs. as features (in Section 23.2.4). It is notable that the locus b in (27a) is not inherited by way of agreement, since it is not introduced by anything. From the present perspective, the existence of this locus is inferred by a closure condition on the set of loci, and its interpretation is inferred by an iconic rule. But the latter makes crucial reference to the fact that loci have denotations. It is not trivial to see how this result could be replicated in a variable-free analysis in which loci do not have a denotation to begin with. Presumably, the complement set locus would have to be treated as being deictic (which is the one case in which the variable-free analysis has an analogue of variable denotations). This might force a view in which complement set loci are handled in a diagrammatic-like fashion, with co-speech gestures incorporated in signs – a point to which we return in Section 23.6.
16 Since bodies are not points, further hypotheses were needed to determine which parts of locus denotations mattered in the relevant ordering; an initial hypothesis is that when it comes to people, their upper bodies matter:
   (i) Partial hypothesis (slightly modified from Schlenker et al. 2013): When evaluating the height of loci denotations,
       a. the position of ca is assessed by considering the real or imagined position of the upper part of the body of ca in cw;
       b. if s(i) is a person, the position of s(i) corresponds to the position of the upper part of the body of s(i) in cw.
17 In this example, the facial expressions remain constant, with lowered eyebrows throughout the realization of the sign, encoded as '~' on the relevant parts of the sign; more natural examples are obtained when the facial expressions are also modulated, and in such cases, more changes of speed can be produced and interpreted – but in these more complex examples it is difficult to tease apart the relative role of the manual vs. non-manual modulation in the semantic effects obtained.
18 More specifically, putting together the non-iconic (context-shifting) part of the analysis developed in Section 23.3.1 and these iconic conditions, the theory has the following structure:
   (i) Role shift has a broadly uniform semantics across attitude and action cases: it shifts the context of evaluation of the role-shifted clause.
   (ii) In ASL and LSF, role-shifted indexicals are obligatorily shifted. Things are different in LSC and DGS, where mixing of perspectives is possible.
   (iii) In ASL and LSF, all indexicals can appear under Attitude Role Shift, but only some indexicals can appear under Action Role Shift (this was captured formally by assuming that Action Role Shift gives rise to different kinds of shifted contexts than Attitude Role Shift).

   (iv) Under Attitude and Action Role Shift alike, signs are interpreted maximally iconically in the scope of the context shift operator.
        • In attitude reports, every sign can be interpreted as being similar to an element of the situation which is reported – namely by way of quotation.
        • In action reports, this is not so (as these need not involve speech or thought acts), but all potentially iconic features of signs are interpreted iconically and thus taken to represent features of the reported situations.
   In both cases, expressions that appear under role shift are both used (as these are instances of indirect discourse) and mentioned because they have a strong iconic (and sometimes quotational) component.
19 More sophisticated work on this issue is being conducted by John Gajewski at University of Connecticut.
20 This paragraph borrows from Schlenker (2017c).

References

Anand, Pranav. 2006. De De Se. Santa Cruz, CA: University of California PhD dissertation.
Anand, Pranav & Andrew Nevins. 2004. Shifty operators in changing contexts. In Robert Young (ed.), Proceedings of SALT XIV, 20–37. Ithaca, NY: CLC Publications.
Bhatt, Rajesh & Roumanyana Pancheva. 2006. Conditionals. In Martin Everaert & Henk van Riemsdijk (eds.), The Blackwell companion to syntax, Vol. 1, 638–687. Oxford: Blackwell.
Bittner, Maria. 2001. Topical referents for individuals and possibilities. In Rachel Hastings, Brendan Jackson, & Zsofia Zvolenszky (eds.), Proceedings of Semantics and Linguistic Theory XI, 36–55. Ithaca, NY: CLC Publications.
Brody, Michael & Anna Szabolcsi. 2003. Overt scope in Hungarian. Syntax 6. 19–51.
Clark, Herbert H. & Richard G. Gerrig. 1990. Quotations as demonstrations. Language 66. 764–805.
Cooper, Robin. 1983. Quantification and syntactic theory. Synthese Language Library 21.
Cuxac, Christian. 1999. French Sign Language: Proposition of a structural explanation by iconicity. In A. Braort et al. (eds.), Gesture-based communication in human-computer interaction, 165–184. Dordrecht: Springer.
Cuxac, Christian & Marie-Anne Sallandre. 2007. Iconicity and arbitrariness in French Sign Language: Highly Iconic Structures, degenerated iconicity and diagrammatic iconicity. In Elena Pizzuto, Paola Pietrandrea, & Raffaele Simone (eds.), Verbal and signed languages: Comparing structures, constructs and methodologies, 13–33. Berlin: Mouton de Gruyter.
Davidson, Kathryn. 2015. Quotation, demonstration, and iconicity. Linguistics and Philosophy 38(6). 477–520.
Emmorey, Karen. 2002. Language, cognition, and the brain: Insights from sign language research. Mahwah, NJ: Lawrence Erlbaum.
Emmorey, Karen & Melissa Herzig. 2003. Categorical versus gradient properties of classifier constructions in ASL. In Karen Emmorey (ed.), Perspectives on classifier constructions in signed languages, 222–246. Mahwah, NJ: Lawrence Erlbaum.
Finer, Daniel. 1985. The syntax of switch-reference. Linguistic Inquiry 16(1). 35–55.
Goldin-Meadow, Susan & Diane Brentari. 2015. Gesture, sign and language: The coming of age of sign language and gesture studies. The Behavioral and Brain Sciences 1. 1–82.
Heim, Irene & Angelika Kratzer. 1998. Semantics in Generative Grammar. Oxford: Blackwell.
Herrmann, Annika & Markus Steinbach. 2012. Quotation in sign languages – a visible context shift. In Ingrid van Alphen & Isabelle Buchstaller (eds.), Quotatives: Cross-linguistic and cross-disciplinary perspectives, 203–228. Amsterdam: John Benjamins.
Hockett, Charles F. 1966. What Algonquian is really like. International Journal of American Linguistics 31(1). 59–73.
Hübl, Annika & Markus Steinbach. 2012. Quotation across modalities: Shifting contexts in sign and spoken languages. Talk presented at the workshop 'Quotation: Perspectives from Philosophy and Linguistics', Ruhr-University Bochum.
Kaplan, David. 1989. Demonstratives. In Josep Almog, John Perry, & Howard Wettstein (eds.), Themes from Kaplan. Oxford: Oxford University Press.


Kegl, Judy. 2004. ASL syntax: Research in progress and proposed research. Sign Language & Linguistics 7(2). 173–206. [Reprint of an MIT manuscript written in 1977].
Kiss, Katalin É. 1991. Logical structure in linguistic structure. In C.-T. James Huang & Robert May (eds.), Logical structure and linguistic structure, 387–426. Dordrecht: Kluwer.
Koulidobrova, Elena. 2011. SELF: Intensifier and ‘long distance’ effects in American Sign Language (ASL). Manuscript, University of Connecticut.
Kuhn, Jeremy. 2015. Iconicity in the grammar: Pluractionality in (French) Sign Language. Talk presented at the 89th Meeting of the Linguistic Society of America, Portland, OR.
Kuhn, Jeremy. 2016. ASL loci: Variables or features? Journal of Semantics 33(3). 449–491.
Kuhn, Jeremy & Valentina Aristodemo. 2015. Iconicity in the grammar: Pluractionality in French Sign Language. Manuscript, New York University and Institut Jean-Nicod.
Liddell, Scott K. 2003. Grammar, gesture, and meaning in American Sign Language. Cambridge: Cambridge University Press.
Lillo-Martin, Diane. 1991. Universal Grammar and American Sign Language: Setting the null argument parameters. Dordrecht: Kluwer.
Lillo-Martin, Diane. 1995. The point of view predicate in American Sign Language. In Karen Emmorey & Judy S. Reilly (eds.), Language, gesture, and space, 155–170. Hillsdale, NJ: Lawrence Erlbaum.
Lillo-Martin, Diane. 2012. Utterance reports and constructed action. In Roland Pfau, Markus Steinbach, & Bencie Woll (eds.), Sign language. An international handbook, 365–387. Berlin: De Gruyter Mouton.
Lillo-Martin, Diane & Edward S. Klima. 1990. Pointing out differences: ASL pronouns in syntactic theory. In Susan D. Fischer & Patricia Siple (eds.), Theoretical issues in sign language research, Volume 1: Linguistics, 191–210. Chicago: University of Chicago Press.
Maier, Emar. 2014a. Mixed quotation (Survey article for the Blackwell Companion to Semantics). Manuscript, University of Groningen.
Maier, Emar. 2014b. Mixed quotation: The grammar of apparently transparent opacity. Semantics & Pragmatics 7. 1–67.
Meier, Richard. 2012. Language and modality. In Roland Pfau, Markus Steinbach, & Bencie Woll (eds.), Sign language. An international handbook, 574–601. Berlin: De Gruyter Mouton.
Nouwen, Rick. 2003. Plural pronominal anaphora in context. Utrecht: University of Utrecht PhD dissertation.
Okrent, Arika. 2002. A modality-free notion of gesture and how it can help us with the morpheme vs. gesture question in sign language linguistics, or at least give us some criteria to work with. In Richard P. Meier, David G. Quinto-Pozos, & Kearsy A. Cormier (eds.), Modality and structure in signed and spoken languages, 175–198. Cambridge: Cambridge University Press.
Partee, Barbara. 1973. Some structural analogies between tenses and pronouns in English. The Journal of Philosophy 70. 601–609.
Quer, Josep. 2005. Context shift and indexical variables in sign languages. In Effi Georgala & Jonathan Howell (eds.), Proceedings from Semantics and Linguistic Theory (SALT) 15, 152–168. Ithaca, NY: CLC Publications.
Quer, Josep. 2013. Attitude ascriptions in sign languages and role shift. In Leah C. Geer (ed.), Proceedings of the 13th Texas Linguistics Society Meeting, 12–28. Austin, TX: Texas Linguistics Forum.
Rothstein, Susan. 2004. Structuring events: A study in the semantics of lexical aspect. Malden, MA: Blackwell.
Sandler, Wendy & Diane Lillo-Martin. 2006. Sign language and linguistic universals. Cambridge: Cambridge University Press.
Schlenker, Philippe. 1999. Propositional attitudes and indexicality: A cross-categorial approach. Cambridge, MA: MIT PhD dissertation.
Schlenker, Philippe. 2003. A plea for monsters. Linguistics & Philosophy 26. 29–120.
Schlenker, Philippe. 2004. Conditionals as definite descriptions (a referential analysis). Research on Language and Computation 2(3). 417–462.
Schlenker, Philippe. 2011a. Iconic agreement. Theoretical Linguistics 37(3–4). 223–234.
Schlenker, Philippe. 2011b. Donkey anaphora: The view from sign language (ASL and LSF). Linguistics and Philosophy 34(4). 341–395.
Schlenker, Philippe. 2011c. Indexicality and de se reports. In Klaus von Heusinger, Claudia Maienborn, & Paul Portner (eds.), Semantics 2, 1561–1604. Berlin: Mouton de Gruyter.


Schlenker, Philippe. 2013a. Temporal and modal anaphora in sign language (ASL). Natural Language and Linguistic Theory 31(1). 207–234.
Schlenker, Philippe. 2013b. Anaphora: Insights from sign language (summary). In Stephen R. Anderson, Jacques Moeschler, & Fabienne Reboul (eds.), The language-cognition interface (Actes du 19e Congrès International des Linguistes, Genève, 22–27 juillet 2013), 83–107. Geneva: Librairie Droz.
Schlenker, Philippe. 2014. Iconic features. Natural Language Semantics 22(4). 299–356.
Schlenker, Philippe. 2016. Featural variables. Natural Language and Linguistic Theory 34. 1067–1088.
Schlenker, Philippe. 2017a. Iconic enrichments: Signs vs. gestures. Behavioral and Brain Sciences 40. 41–42.
Schlenker, Philippe. 2017b. Super monsters I: Attitude and action role shift. Semantics & Pragmatics 10(9). 1–65.
Schlenker, Philippe. 2017c. Super monsters II: Role shift, iconicity and quotation in sign language. Semantics and Pragmatics 10(12). 1–67.
Schlenker, Philippe. 2018. Visible meaning: Sign language and the foundations of semantics. Theoretical Linguistics 44(3–4). 123–208.
Schlenker, Philippe, Jonathan Lamberton, & Mirko Santoro. 2013. Iconic variables. Linguistics & Philosophy 36(2). 91–149.
Schlenker, Philippe & Gaurav Mathur. 2013. A strong crossover effect in ASL. Snippets 27. 16–18.
Stone, Matthew. 1997. The anaphoric parallel between modality and tense. IRCS Report 97–06. Philadelphia, PA: University of Pennsylvania Press.
Strickland, Brent, Carlo Geraci, Emmanuel Chemla, Philippe Schlenker, Meltem Kelepir, & Roland Pfau. 2015. Event representations constrain the structure of language: Sign language as a window into universally accessible linguistic biases. Proceedings of the National Academy of Sciences 112(19). 5968–5973.
Szabolcsi, Anna. 2000. The syntax of scope. In Mark Baltin & Chris Collins (eds.), Handbook of contemporary syntactic theory, 607–634. Oxford: Blackwell.
Taub, Sarah F. 2001. Language from the body: Iconicity and metaphor in American Sign Language. Cambridge: Cambridge University Press.
Wilbur, Ronnie B. 2003. Representations of telicity in ASL. Chicago Linguistic Society 39. 354–368.
Wilbur, Ronnie B. 2008. Complex predicates involving events, time and aspect: Is this why sign languages look so similar? In Josep Quer (ed.), Signs of the time. Selected papers from TISLR 2004, 217–250. Hamburg: Signum.
Wilbur, Ronnie B. & Evie Malaia. 2008. Event Visibility Hypothesis: Motion capture evidence for overt marking of telicity in ASL. Paper presented at the Linguistic Society of America Annual Meeting, Chicago.
Zucchi, Sandro. 2011. Event descriptions and classifier predicates in sign languages. Talk presented at Formal and Experimental Advances in Sign Language Theory (FEAST), Venice, June 2011.
Zucchi, Sandro. 2012. Formal semantics of sign languages. Language and Linguistics Compass 6(11). 719–734.


24 NON-MANUAL MARKERS
Theoretical and experimental perspectives
Ronnie B. Wilbur

24.1  Introduction
Because spoken languages do not have grammatical non-manual markers (NMMs), sign language researchers have tried to figure out the best way to analyze them, but without any guidelines from spoken language grammars. Here we evaluate what linguistic functions are served by NMMs. To accomplish this, we need to avoid some distractions, such as a discussion of whether NMMs came from gestural/affective facial expressions. This is because, even if they did (Benítez-Quiroz et al. 2016), it does not tell us much about what they currently contribute to sign language grammar. There is substantial research on the differences between affective facial expressions (for expressing emotions) and the grammaticalized NMMs that are the focus of this chapter (see Section 24.6). Knowing that a NMM is derived from a particular facial expression can provide direction to, or support for, an analysis, but it is not a linguistic analysis by itself. The reader needs to be alert to this distinction to understand what this chapter is about. The cumulative evidence from more than 40 years of investigation supports accepting the notion that NMMs are part of sign language grammar (meaning they must be learned as part of learning the language). However, because NMMs are not obviously parallel to something in spoken languages, there has been speculation about what they do: are they part of the phonological, morphological, syntactic, semantic, pragmatic, or prosodic components? More likely, they are distributed across these components; indeed, the very existence of a ‘debate’ results from just this fact. In addition, sign languages differ in that they can use the same NMMs for different purposes. It is also possible that related sign languages may use the same or similar NMMs in the same way, whereas unrelated sign languages may use them differently. We can multiply these possibilities. But we can also raise serious questions about whether it is correct to assume that linguistic components are independent, that is, whether it makes sense to talk about NMMs being ‘in a component’. But that is what the puzzle is – an attempt to identify which ways of talking about NMMs make the best sense. To evaluate these options, we adopt the notions that “solutions should make predictions” and “the best solution makes correct predictions”; suggested solutions must be tested to determine whether their predictions are correct. This is how science works.


24.1.1  Overview of argumentation and testing claims Two brief illustrations will help with the evaluation below. Coulter (1978: 68) suggested (erroneously) that American Sign Language (ASL) phrases with raised eyebrows “all describe background information: information which describes the scene or situation in terms of which the rest of the sentence is to be interpreted. […] [N]‌one of the phrases which have raised eyebrows are assertions.” In ASL, these structures include topics, conditionals, yes/​no-​questions, relative clauses, when-​phrases, focus structures (topicalization/​focalization, wh-​clefts, focus particles ‘even’ and ‘only’), left dislocations, and material before focused modals and negatives; in short, there is a long, seemingly unconnected list of structures (according to Coulter’s claim), where raised eyebrows are expected to occur. Sandler (2010) calls Coulter’s analysis ‘pragmatic’, related to discourse structure or information status (another early pragmatic account is Engberg-​Pedersen (1990)). But 20  years afterwards, it was noticed, on careful examination, that some structures which met Coulter’s description “not an assertion” did not have raised brows, and worse, some structures that were not predicted to have raised brows, because they contained new asserted information, did have raised brows (Wilbur & Patschke 1999). Thus, his claim failed on two counts: incorrectly predicting there should be raised brows when there are not, and incorrectly predicting that there should not be raised brows where there are. This example serves as a cautionary note: science proceeds by testing all assumptions and results. Because we all thought Coulter’s claim made sense, no one actually checked it. The second example demonstrates how careful testing can separate different behaviors. First, Baker-​Shenk (1983: 63) analyzed NMMs added to the sign WRONG to create the idiomatic meaning ‘unexpectedly’: brow raise (AU 1+2),1 eyes widen (AU 5), mouth opening (AU 26). She tested whether each piece is itself a separate morpheme (semantic component) and concluded instead that the total configuration is necessary to contribute the meaning ‘surprise’. In contrast, a second analysis showed that the headshake and tongue protrusion accompanying the sign NOT-​YET are each separate morphemes: the headshake contributes negation while the tongue carries ‘lack of control, attention, or completion’. Baker-​Shenk’s analysis makes three claims for ASL: (i) the meaning ‘surprise’ will (generally) include all three components of brow raise, eyes widen, and mouth opening; (ii) negation in general will be accompanied by headshake; and (iii) tongue protrusion will carry the meaning ‘lack of control’ and will occur not only with negation (as in NOT-​YET), but also without it (that is, it is independent of negation). Each of these claims can be (and has been) separately tested. This is what we mean when we say that the best solution makes correct predictions, and that each solution must be tested to determine whether its predictions are correct.

24.1.2  Overview of the chapter
The recent debate concerning the ‘status’ of NMMs in sign languages can be seen as a web of intersecting questions. Among them are: (i) how do we analyze their status when we compare them to spoken languages, (ii) how do we analyze their functions, (iii) how do we analyze their combinations, (iv) how do we analyze their coverage, that is, what signs they may appear with, (v) are they really special markers in sign languages or are they simply different forms of what is already marked in spoken languages? Given all these questions, it is easy to imagine how complicated the current picture of NMMs in sign languages really is. In the theoretical portion of this chapter, the issues will be approached by providing some historical background (Section 24.2), explaining


what is known about the interaction of syntax, semantics, and prosody to clarify what options are really available (Section 24.3), presenting analyses of NMMs that challenge prevailing assumptions (Section 24.4), and evaluating what all this tells us (Section 24.5). In the experimental portion (Section 24.6), studies of the acquisition of NMMs will be presented (Section 24.6.1), followed by production studies (Section 24.6.2), and then processing studies (Section 24.6.3). For a description of the NMMs themselves, see van der Hulst & van der Kooij (Chapter 1). It is also assumed that readers are thoroughly familiar with the content of the chapter on prosody (Fenlon & Brentari, Chapter 4). In the following discussion, we will omit NMMs clearly associated to specific lexical items because, except in very rapid signing, they do not spread onto adjacent signs, and, because we use spreading behavior as a test, the issue of how to interpret their behavior does not arise (Elliott (2013) concludes that facial expressions over single lexical items are phonological; although see Pfau (2016) for a discussion of spreading of lexical ‘mouthing’, a bilingual interface phenomenon, as similar to tonal spread over phonological phrases).

24.2  Historical development of the treatment of NMMs Over time, the NMMs debate has resulted in two ‘sides’ –​typically in the form of asking whether NMMs are syntactic or prosodic. Herrmann (2013: 166–​179) provides a summary of how the debates have been framed. For the syntactic analysis, she notes that NMMs instantiate syntactic features, and that NMMs either spread along the syntactic c-​command domain of the triggering feature or are triggered by Spec-​head relations. For the prosodic analysis, she highlights evidence for non-​isomorphism between syntactic and prosodic alignment, compositionality (multiple NMMs combined), and pragmatic context-​dependency. Here, non-​isomorphism means that the syntactic structure and the prosodic structure do not exactly match (more below). It should come as no surprise that it is hard to find researchers taking either extreme position; authors more commonly consider both approaches and take one as convenient for discussion of their data (as Herrmann (2013) also does). This means that this chapter cannot divide the literature into two groups and identify who is in each group. In fact, my own research highlights this clearly: Wilbur (1991) talked about NMMs as intonation (in the prosodic domain); Wilbur & Patschke (1999) noted the correlation between certain NMMs and specific syntactic structures; Wilbur (1995) discussed brow raise as an overt morphological marker of a particular syntactic (possibly semantic) feature; Wilbur (2000) talked about NMMs as compositional layers of semantic information on top of the hands and on top of each other; Wilbur (2011) argued that brow raise differs from other ASL NMMs because it represents a different type of semantic operator than the others. While some people find it frustrating that I seem to frequently change my mind, in fact science progresses steadily as we continually test predictions, with more data, until we get it right. Just as I can say that my earlier work did not ‘get it right’, I can also show how problems remain with other claims that have been made. The purpose of doing this is not to claim that I am right and they are not, but to show the reader how claims can be tested in sign language linguistics as they are elsewhere in linguistic science.

24.2.1  Background on NMM analysis
The earliest analyses of ASL NMMs were from the syntactic side (Liddell 1977, 1978, 1980; Padden 1983; Aarons 1994; Wilbur & Patschke 1999; Neidle et al. 2000; see further


discussion in Section 24.4). For example, Bahan (1996) discussed the use of head tilt and eye gaze to convey agreement of a verb with its subject and object (one could also argue that this is morphological, or even morphosyntactic). His claim was that head tilt indicated subject agreement and eye gaze marked object agreement, with one exception, namely in the case of a first-​person object, because signers cannot direct their eye gaze at themselves, so first-​person object agreement must instead be indicated by head tilt, switching jobs with eye gaze to show subject agreement. With intransitive verbs, which have only one NP, head tilt, eye gaze, or both may show agreement (but see the discussion of Thompson et al. (2006) in Section 24.6.3.1). MacLaughlin (1997) demonstrated similar marking for agreement with possessor and possessed NPs inside DPs. But it would be wrong to put them on the syntactic ‘side’ with the implication that they rejected prosodic options; instead, their main points were that NMMs should be treated like other parts of grammar and not as just signing ‘style’ (also Kegl & Wilbur 1976). In all of these early analyses, when comparisons with spoken language and sign languages grammars were made, it was made clear that to leave out the NMMs would miss a major portion of the sign language grammar, which would erroneously give the impression that sign languages do not have the same level of grammatical complexity as spoken languages (cf. Ichida (2010) for his explanation for why some people mistakenly believe that Japanese Sign Language does not have syntax, or has the same syntax as spoken Japanese; see Section 24.4.2.3). Likewise, Sandler, who is a strong proponent of the prosodic approach, makes it clear that she is excluding certain NMMs that have other functions, such as lower face adverbs, and thus her prosodic proposal is not about all NMMs but only some NMMs (specifically head and upper face). She argues against a ‘direct syntactic’ approach rather than every syntactic approach, as some NMMs have both syntactic and prosodic contributions (Weast 2008). Sandler (2010) argues that the ‘principal argument’ against the direct syntax approach is non-​isomorphism of syntax and prosody. As explained below, this position cannot be maintained, as non-​isomorphy is often the expected result rather than a systematic problem; to capture the facts, we are required to adopt a more dynamic model of prosody. As with any theoretical and empirical science, understanding of how detailed pieces fit together requires that changes must be made to models over time (this is one reason why my own work has looked at NMMs differently over the years). To understand the NMMs debate, we need to consider one of those models now, namely the interaction between syntax and prosody (putting aside semantics, pragmatics, and information structure, temporarily). It is easy to show that syntactic constituents do not always align with prosodic constituents (this is non-​isomorphy). As a speaker of a fast speech New  York dialect, I am able to reduce the question ‘Did you eat yet?’ to two syllables [d͡ʒi:t ʝɛt] in normal conversation, with the first three words reduced to one syllable. No syntactic model predicts this result. However, this does not mean that this New  York dialect has ‘no syntax, only prosody’. It also does not mean that this question does not have syntactic structure before it comes out of my mouth. 
We must recognize where prosody fits into the model of grammar that we are building; for simplicity, we assume that it comes after other parts are put together. Similarly, changes in signing rate (a prosodic effect) change the number of NMMs that occur in productions but not the signs that are covered by them (cf. Wilbur 2009; details in Section 24.6.2 below). At faster signing rates, signers do not bother to lower the brows between phrases, so instead of three separate brow raises, only one (longer) brow raise may be observed, yet all the signs that are supposed to be in the brow raise domain are still covered by brow raise. That is, if we simply count brow raises,


there are fewer in faster signing than in normal or slow signing; the same is true for other NMMs. There is no one-​to-​one mapping of syntax to prosodic constituents in sign languages or spoken languages. The existence of non-​isomorphy is related to production rate (among other variables). The next challenge is to provide an analysis when we know the signing rate can have such a major effect. Selkirk has spent years trying to capture the relation between syntactic constituency and prosodic constituent domains for sentence-​level phenomena in spoken languages. In Selkirk (2011), she provided a revised model of the relationship between syntax and prosody, in which there is an interface component that includes syntactic-​prosodic constituency correspondence constraints, which, fortunately, covers everything we need.2
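The correspondence constraints in that interface component can be given a schematic statement (a simplified sketch only; the mapping of clause, phrase, and word to ι, ϕ, and ω is spelled out below, and the exact formulation in Selkirk (2011) differs in detail):

\[
\begin{aligned}
&\text{Match}(\text{clause}, \iota): \ \text{a syntactic clause corresponds to an intonational phrase } (\iota)\\
&\text{Match}(\text{phrase}, \varphi): \ \text{a syntactic phrase corresponds to a phonological phrase } (\varphi)\\
&\text{Match}(\text{word}, \omega): \ \text{a syntactic word corresponds to a prosodic word } (\omega)
\end{aligned}
\]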

24.3  The interaction of syntax, semantics, and prosody
To understand Selkirk’s revision, we need to back up to the prosodic analysis that has been offered for sign languages, the one that participates in the puzzle to be solved in this chapter. Sandler (2012: 69) offers the following suggestion for determining prosodic status: “A useful working assumption is that a cue is prosodic if it corresponds to the functions of prosodic cues known from spoken language. […] A further test is whether the distribution of the cue in question is determined by the domain of prosodic constituents, where these can be distinguished from morpho-syntactic constituents.” Thus, to be a prosodic cue in sign languages, there must be a match with functions observed in spoken languages, and there must be evidence in the language that the domains of prosodic constituency may be different from morphosyntactic constituency. To use my reduced New Yorker question again, it is clearly the case that the prosodic structure of the resulting two syllables wrapped in a question intonation does not match the syntactic phrasing of a question with the auxiliary verb ‘did’ moved to the front of the subject ‘you’ followed by the verb phrase composed of the verb ‘eat’ and a temporal modifier ‘yet’; thus, we would want to say that the prosodic structure is (relatively) independent of the underlying syntactic structure. From this traditional view, prosodic structure is built from the Prosodic Hierarchy (see Fenlon & Brentari, Chapter 4). Sandler bases her model on Nespor & Vogel (1986); their interpretation of how the Prosodic Hierarchy applies to sign languages presupposes that only the hierarchy is relevant, that is, that prosodic structure is determined solely by phonological constituency, and there is no assumed relationship between those units and the categories in the syntactic representation (Nespor & Sandler 1999). From this view, non-isomorphism is a mismatch problem that provides evidence for independence of the prosodic structure (although it remains unclear why the independence of prosody would bear on the function of NMMs). In contrast, Selkirk’s (2011) main point is that looking only at syntax or only at prosody misses the contribution of the correspondences between the two, that is, a true syntax-phonology interface.3 Her Match Theory posits a set of universal Match correspondence constraints that interact with language-particular phonological constraints which result in forms that are either isomorphic or non-isomorphic with syntactic structure. It is this view that best advances our treatment of NMMs in sign languages. After all, I can still slow down and ask ‘Did you eat yet?’ with four syllables.
If we use my New Yorker question as an example, in the ideal prosodic structure (from the Prosodic Hierarchy with no attention to syntax), there would be four syllables produced, and because in this example each word is a single syllable, there would be four prosodic words, which would be grouped into a number of prosodic phrases (depending on the definition used), and finally into one intonational phrase at the top (the whole question itself). What happens when my New Yorker speech is produced is that, like the effects of signing rate, there is a reduction of the number of prosodic constituents due to Fast Speech Phenomena. That is, there is further readjustment of prosodic structure after the ideal prosodic structure is constructed, resulting in only two syllables (note that the question intonation is still kept above them). The final prosodic structure no longer matches the original syntax at the phrasal level inside the question, nor is the prosodic structure obviously built from the Prosodic Hierarchy, which would have required four syllables and four prosodic words. This readjusted prosody makes it harder to see what is put there by the syntax and what comes from prosodic readjustments afterwards. It also should make it perfectly clear that not only do syntax and prosody not always match during production in spoken language, it is expected that they will not always match. We may view isomorphism as a theoretical ideal, with the ideal speaker producing the ideal speech at the ideal rate. Slowing or speeding that rate results in readjustments to prosodic structure, also for sign languages and their NMMs (cf. Wilbur 2009).
However, our picture of prosodic structure is still not complete. We put aside semantics, pragmatics, and information structure to get this far, but now they need to be brought back to understand why Selkirk’s revision is especially relevant. Sandler (2010: 306) discusses the reduction (‘restructuring’) of a verb and its object into a single phonological phrase (pp) in Israeli Sign Language (ISL), as in (1a) (see also Figure 4.3 in Fenlon & Brentari, Chapter 4). She also observes that a verb phrase containing a noun phrase that itself contains an adjective phrase could occur in separate phonological phrases, yielding three phonological phrases of one sign each, and that when the phrases are larger (have more words), separate phrases are more likely. In (1b), I show just the verb and its object in separate phonological phrases, that is, the pre-restructured form.

(1)  a. [BAKE CAKE]pp [TASTY]pp
     b. [BAKE]pp [CAKE]pp
     c. [CAKE TASTY]pp
     (ISL, Sandler 2010: 306)

The two forms differ in prosodic phrasing but not in syntactic phrasing (both VP), leading to Sandler’s (2010) conclusion that syntactic structure is not a reliable predictor of prosodic structure. We have already seen this with my New Yorker question, for which we saw prosodic readjustment in the form of a reduction of the number of prosodic constituents (among other phonetic modifications). For a listener not familiar with the New Yorker reduction rules, my question would be hard to understand. But there are rules to my dialectal production (other people also produce this, and they understand it), and those rules can be learned. Likewise, there are rules that enhance, rather than reduce, prosodic structure. Semantics, pragmatics, and information structure are involved in these enhancements. Sandler (2010) notes that adjectives (which normally follow nouns in ISL) can also form one larger pp with the preceding noun (CAKE TASTY), as shown in (1c), but if the adjective is modified by an intensifier (VERY), then such reduction is blocked. Intensifiers interact with prosodic structure, in this case to enhance the prominence of TASTY, which has the result of blocking reduction of the prosodic constituent containing TASTY into the prosodic constituent containing CAKE. Likewise, focus for new information, contrastive focus of two (or more) options or correction of a prior statement will all enhance some piece of the utterance and prevent reduction due to rate. In really slow production, every word


may feel enhanced, making it more like a list of words rather than a regular sentence. So, our prosodic model has both reduction and enhancement for various purposes, among them production efficiency and information foregrounding/​contrast (for ASL NMMs, see Wilbur (1994b) and Wilbur & Patschke (1998)). Consistent with Sandler’s description, our prosodic model is dynamic, in that it must allow for different conversational factors to interact with prosodic constituency. Given this kind of a prosodic model, the question arises as to how we construct prosodic constituency in the first place. Selkirk (2011) notes that standard prosodic theory considers prosodic constituency to be defined by two properties unrelated to syntactic constituency –​(a) the Prosodic Hierarchy, and (b) Strict Layering. The Prosodic Hierarchy builds phrases, each composed of units from the layer directly below it on the hierarchy, starting with syllables combined into feet, which are then combined into prosodic words, then prosodic phrases, and finally intonational phrases. Strict Layering restricts these combinations so that an upper layer is clearly composed only from units of the next lower layer. Selkirk argues that the Strict Layering hypothesis cannot be maintained. Instead, she proposes Match Theory, which calls for a starting match between syntactic and prosodic constituents (see also Kratzer & Selkirk 2007; Katz & Selkirk 2011). Each syntactic level (word, phrase, clause) must be matched with prosodic constituency in phonological representation (by corresponding ι, ϕ, ω prosodic constituent types, respectively). This match requirement predicts that phonological and syntactic domains will be closely matched (at least as a default) at the start of a derivation (before rate effects and contrast enhancements). However, language-​specific phonological markedness constraints may result in violations of Match constraints, which then can lead to non-​isomorphism (my New Yorker question ends up with two syllables, but dialect constraints prevent three or only one). This approach (a) provides initial derivation of prosodic constituency from syntactic constituency (not merely from lowest layers on the Prosodic Hierarchy), and (b) identifies the relevant factors and circumstances under which prosodic constituency may be altered, resulting in non-​isomorphism. That is, the mismatches are not simply random, and prosodic constituency is not a completely independent system; thus, one cannot say that something ‘is prosodic’ without at the same time making a concurrent claim that it is also syntactic (from which prosodic constituency is derived) or also phonological (markedness constraints may induce violations of isomorphism).4 Thus, to consider the issue again –​are NMMs syntactic or prosodic? –​current thinking on both syntax and prosody make the question ill-​formed in the first place. A more appropriate question might be ‘Which factors are involved in the construction of an observed production?’ For example, given Sandler’s discussion above of what happens with the sequence CAKE VERY TASTY, the presence of the intensifier VERY would block prosodic reduction. Intensification can be viewed as enhancement resulting from semantic intent –​the function of intensification is to widen the scale of the adjective it modifies (here: TASTY). But this is not the only factor involved. Suppose the full utterance is ‘This cake is very tasty’, then ‘very tasty’ is part of the assertion (new information) of the sentence. 
However, a follow-up statement by another signer could be ‘I also thought the cake was very tasty but the icing is too sweet for me’. Here, ‘very tasty’ is now part of the background of the utterance (it has already been introduced into discourse by the preceding utterance by the first signer), and the new information (assertion) is ‘the icing is too sweet for me’.5 Reduction to [CAKE VERY TASTY]pp might be perfectly possible in this second situation, but we are not given enough information to know if this is possible in ISL. What this shows is that the rules involved in prosodic constituency are not hard and


fast; what may be blocked in one situation may be allowed in another because there are interfaces between prosody and other components of the grammar. Information structure and discourse status (given, new; presupposition, assertion; backgrounded, focused, contrastively focused) also affect the ultimate prosodic constituency seen in the realized production. Given this possibility, we have to seriously question whether showing that there is non-​isomorphism between syntax and prosody provides an argument that NMMs should be treated as ‘prosodic’. In fact, it does not tell us whether the NMM is a grammatical morpheme, whether it is syntactically associated, whether it is semantically necessary, or whether it is pragmatically relevant, because the correct analysis could easily be ‘all of the above’, given how these factors interact in natural languages. From a perspective similar to Selkirk’s, Pfau (2005) tentatively associates NMMs spreading with prosodic domains of decreasing size, each associated with a portion of the syntactic tree:  intonational phrase (‘syntactic spreading’), phonological phrase (‘prosodic spreading’), and phonological word (‘prosodic linking’) (Figure 24.1). Using a Cinque-​style (1999) tree, Pfau (2005) identifies an outer functional layer above NegP, containing the CP projections that host topics, focus, and various operator-​like elements, with NMMs that spread from this domain constrained syntactically. In the inner functional layer, below NegP and above v(oice)P, morphology controls specifications, and there is more prosodic influence on spreading. Finally, the lowest layer is the lexical layer, in which NMMs are determined by lexical entries and co-​occurrence with manual signs is prosodic linking (also see Pfau 2016). Thus, as Selkirk suggests, in Pfau’s model, the prosodic domain is derived from the syntactic domain, and the observed spreading depends on where in the syntactic tree the NMMs are hosted.

Figure 24.1  Division of syntactic tree into three layers (cf. Pfau 2005: 2)
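In schematic form, the layering that Figure 24.1 depicts can be summarized as follows (a restatement of Pfau’s division as described above; the arrow notation is expository only):

\[
\begin{aligned}
\text{outer functional layer (CP projections, above NegP)} &\ \Rightarrow\ \text{intonational phrase: ‘syntactic spreading’}\\
\text{inner functional layer (below NegP, above } v\text{P)} &\ \Rightarrow\ \text{phonological phrase: ‘prosodic spreading’}\\
\text{lexical layer} &\ \Rightarrow\ \text{phonological word: ‘prosodic linking’}
\end{aligned}
\]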


We need one additional note about syntactic spreading here before we proceed. Wilbur (2011) notes that the NMMs discussion in the literature has generally failed to consider that there is a clear spreading domain difference (at least in ASL) for lowered brows (for [+wh]) and negative headshake ‘neg hs’ (both c-​command) as compared to raised brows (which does not spread over its c-​command domain). This distinction will be critical as we evaluate the various proposals.

24.4  Analyses of NMMs that present challenges to prosodic analyses

24.4.1  Syntactic approaches
Several papers (Wilbur & Patschke 1999; Neidle et al. 2000) did explicitly argue that head/upper face NMMs are under ‘purely’ syntactic control in ASL. That is, the reason why a NMM is present at all is because it reflects syntactic features or structures. As would be expected for a syntactic function, the spreading domains of the NMMs are determined by syntactic constituency. For example, NegP contains a [+neg] feature in its head Neg, and possibly also a manual negative sign (e.g., NOT). If a negative sign is present, spreading of the NMM ‘negative headshake [hs]’ is optional – that is, it could be only on the negative sign or it could spread over all the signs in the c-command domain of the negative. If there is no negative sign, then spreading must be over the c-command domain (the scope of the negative, usually VP). If spreading occurs, all the signs to the right of the head of NegP, that is, the syntactic c-command domain of Neg, are covered by the negative headshake (2). Thus, the feature [+neg] is responsible for the presence of ‘negative headshake’, and the syntactic constituency determines its spreading.

(2)  Negation in a yes/no-question in ASL (Bahan 1996: 55)
                        q
              neg
     JOHN NOT LIKE MARY
     ‘Doesn’t John like Mary?’

While this is true for ASL, the same NMMs can have different behaviors in different sign languages. Pfau & Quer (2002) argued that negative headshake in ASL is syntactically controlled, but that in German Sign Language (DGS) and Catalan Sign Language (LSC), it is morphological. Furthermore, the morphological behavior differs in LSC and DGS. Pfau & Quer show that in DGS and LSC, both SOV languages, the negative sign occurs after the verb and is covered by the negative headshake. Also, in both languages, the negative headshake must be bound to a negative sign, but the similarity ends there: in DGS with the negative sign present, the preceding verb must also be covered by headshake, otherwise the structure is ungrammatical, whereas in LSC the preceding verb is only required to have the negative headshake when the negative sign is not present. In contrast, ASL is an SVO language, and the negative sign, if present, comes before the verb. If the negative sign is present, the headshake may cover just it (3a) or may spread over the whole VP (3b). However, if the negative sign is not present, the negative headshake must spread over the whole VP (verb plus object if present) (3c), with headshake just on the verb being ungrammatical (3d). Neither DGS nor LSC requires the headshake to spread onto the object (which precedes the verb in SOV). Thus, the ‘same’ NMM may have different status in different sign languages (also see Gökgöz, Chapter 12).
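To make ‘spreading over the c-command domain’ concrete, the structural claim just described can be rendered as a simplified labeled bracketing (a sketch only; actual analyses such as Neidle et al. (2000) assume additional projections that are omitted here):

\[
[\,_{\mathrm{NegP}}\ [\,_{\mathrm{Neg}}\ \text{NOT}\,]\ \underbrace{[\,_{\mathrm{VP}}\ \text{BUY HOUSE}\,]}_{\text{c-command domain of Neg: covered by negative headshake}}\,]
\]

If the manual sign NOT is present, the headshake may remain on NOT alone or extend over the bracketed VP; if NOT is absent, the headshake must cover the VP, as illustrated in (3) below.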


(3)  Spreading options for negative headshake in ASL (Neidle et al. 2000: 44–45)
               hs
     a. JOHN NOT BUY HOUSE
               hs
     b. JOHN NOT BUY HOUSE
              hs
     c. JOHN BUY HOUSE
              hs
     d. * JOHN BUY HOUSE
     ‘John didn’t buy a house.’

A different syntactic function of a head NMM is shown in Turkish Sign Language (TİD) by Göksel & Kelepir (2013), who propose that ‘head tilt’ itself signals a pragmatic prompt for a response. However, it is the head position that indicates the syntactic clause type: ‘head forward’ for polar (yes/no) questions and ‘head backward’ for content ([+wh]) questions. The way they view it, head tilt is the signal for an interrogative (in their terms, ‘demand-for-response’), with forward or backward as interrogative sub-types. Head forward may be accompanied by a head nod, and head backward may be accompanied by a headshake, but interestingly, the accompanying nod or shake does not always have the same domain as the head forward or backward. They tentatively conclude that headshake and nod minimally coincide with the predicate (i.e., not the whole interrogative clause); thus the syntactic domains are not even the same for all the NMMs involved. The head tilt is present as a pragmatic signal, but the actual ‘tilt’ position is a result of the clause syntax. Inside the clause, the nod or headshake would be associated to a smaller syntactic constituent, the VP. If the prosodic constituencies are then derived from the syntactic ones, as Selkirk (2011) suggests, we would expect the head tilt (backward or forward) over a larger domain than the nod or headshake, as seen in (4); note also that (4a) shows an interruption of ‘head forward (hf)’, which could result from lexical, pragmatic, or other sources.

(4)  a. Yes/no-questions
                                           hf
        head forward              head nod
        NOW NEXT SUMMER HOLIDAY GO THINK INDEX2
        ‘Are you now thinking of going on a holiday next summer?’
        (TİD, Göksel & Kelepir 2013: 13–14)
     b. Content questions
        head backward             headshake
        PERSON WORK WHAT DO WHAT
        ‘What (kind of) work does s/he do?’
        (TİD, Göksel & Kelepir 2013: 14)

The explanation provided by Pfau & Quer (2002) for negative headshake includes consideration of language-​specific requirements stemming from syntax and/​or morphology. Göksel & Kelepir’s (2013) explanation of head behavior in TİD includes requirements from pragmatics and syntax. Next, we consider additional semantic requirements in ASL.


24.4.2  Semantics There have been at least two analyses that have suggested explanations of NMMs involving semantics in addition to syntax. On the one hand, Wilbur (2011) showed that semantic operator type (dyadic or monadic; more details below) provides a better explanation of the difference in spreading behavior of eyebrow and head positions than the purely syntactic analysis offered in Wilbur & Patschke (1999). On the other hand, Churng (2011) argued that three different semantic interpretations of a particular sequence of wh-​signs depend on syntax, and that the location of differences in prosodic pauses among the three interpretations could be predicted from the syntactic derivation. In other words, Churng’s work is important because it illustrates how semantics, syntax, and prosody are all intertwined: if you change the syntax, the prosody changes, and then you get a different semantic interpretation.6

24.4.2.1  An explanation for the alternative spreading domain for brow raise in ASL The observation that needs to be explained is this: some NMMs spread over large domains all the way to the end of a sentence, no matter how many signs there are between where the NMM is triggered (whatever turns it on, for example, negation for negative headshake) and the end of the sentence, but there is at least one NMM (so far), namely brow raise in ASL, that (unexpectedly) stops spreading before then, even if there are more signs before the end of the sentence. If we want to be able to teach ASL students when to turn brow raise on and off, we have to be able to explain why it is different from other NMMs. Once we understand this difference, we can then also look for similar differences in other sign languages, although maybe the NMM that behaves differently will not be brow raise but something else. Unless we look for it systematically, we are not likely to find it. Initially, Wilbur & Patschke (1999) raised the issue of syntactic spreading domains, namely that typical examples of syntactically-​ motivated spreading domains are c-​ command based. That is, once the trigger/​source for a NMM is identified (whether a feature or sign, e.g., negative or [+wh]), the NMM spreads, covering all the signs that are c-​commanded by the location of that source.7 However, there is a split, at least in ASL, between NMMs that have c-​command as spreading domains and those, like brow raise, that do not. As mentioned, the earliest analysis of brow raise in ASL was the pragmatic one by Coulter (1978), but Wilbur & Patschke argued against that explanation. The problem that remained was how to explain the spreading difference? At that time, Wilbur & Patschke (1999) attempted to provide a purely syntactic solution (A-​bar position, reflecting a special operator-​variable relationship), which reflected the facts but really did not provide an explanation for why there was a difference. The explanation is more linguistically complicated, involving semantics at a level that most people are not familiar with. Thus, it will take a bit longer to explain. But, as mentioned above, in the meantime, Sandler (1999a, 1999b, 1999c, 2005, 2010, 2012; Dachkovsky & Sandler 2009) claimed that NMMs in general should be viewed as parallel to the prosodic function of intonation in speech; for example, intonation can make the difference between a declarative and an interrogative. Likewise, Sandler & Lillo-​ Martin (2006) claimed that lowered brows reflect the discourse function of a content question expecting more than a simple yes/​no answer, and that brow raise (‘br’) signals the discourse function of ‘forward looking, prediction’. If this were true, it would be a lot


simpler to explain than the more complex semantic reason we end up needing. Consider their claim by looking at an example they use (5).        

(5)       br
     EXERCISE CLASS, I HOPE SISTER SUCCEED PERSUADE MOTHER TAKE-UP
     ‘The exercise class, I hope my sister manages to persuade my mother to take (it).’
     (ASL, Padden 1983: 76)

Their proposal is that ‘exercise class’ has ‘forward-looking/continuation’ brow raise because more information will follow in the sentence (in fact, the whole rest of the sentence follows, without brow raise). But this explanation raises a question: why is there no brow raise on ‘I hope’ (which is followed by ‘my sister manages…’), or on ‘I hope my sister manages’ (followed by ‘to persuade my mother…’), or on ‘I hope my sister manages to persuade my mother’ (followed by ‘to take (it)’)? In short, why is there no brow raise on everything that is not the last word (or phrase, or clause), since each is in fact followed by more information? If we evaluate the ‘forward looking brow raise’ proposal using the question ‘Does the explanation correctly predict when the NMMs will appear?’, we would have to say that it does not, because we do not know why the brow raise is missing where we might expect it, given their proposal. Likewise, compare some additional ASL examples in (6) to (12), all of which have brow raise; in each case, the spread is not over the c-command domain, that is, there is no spread over everything to the right until the end of the sentence.8

(6)  Relative clauses (ASL, Wilbur & Patschke 1999: 9; after Liddell 1978)
                  br
     a. 1ASK3 GIVE1 DOG [[URSULA KICK e] THAT]DP
        ‘I asked him to give me the dog that Ursula kicked.’
                  br
     b. DOG BITE1 [[CHASE CAT BEFORE] THAT]DP
        ‘The dog bit me that chased the cat before.’
            br
     c. [[DOG CHASE CAT]CP]DP BARK
        ‘The dog that chased the cat barked.’

(7)  WH-cleft (ASL, Wilbur & Patschke 1999: 10)9
                         br
     SHE GAVE HARRY WHAT, NEW SHIRT
     ‘What she gave Harry was a new shirt.’

(8)  Different types of topics (ASL, adapted from Aarons 1994: 176)
       br       br
     JOHNi, VEGETABLE, IX-3i PREFER ARTICHOKE
     ‘As for John, as for vegetables, he prefers artichokes.’

(9)  Topic and sentential subject (ASL, adapted from Lillo-Martin 1986: 425)
        br                      br
     aBILL, bMARY KNOW aINDEX, NOT^NECESSARY
     ‘As for Bill, that Mary knows him is not necessary.’

(10) Focused NP (that-cleft) (ASL, Wilbur & Patschke 1999: 15)
     Context: And what about Fred? What did he eat?
              br
     BEANS THAT, FRED EAT
     ‘It’s beans that Fred ate.’

(11) Conditional (ASL, Baker & Padden 1978: 51)
                              br
     (SUPPOSE/IF) TOMORROW RAIN, I NOT GO BEACH
     ‘If it rains tomorrow, I won’t go to the beach.’

Note the near-minimal pair in (12), a focused modal with brow raise in (12a) compared to modal doubling with no brow raise allowed in (12b).

(12) a. Focused can’t (ASL, Wilbur & Patschke 1999: 11)
                                              br
        BUT STAY HOME ALL-DAY EVERY-DAY CAN’T
        ‘But (I discovered that) I can’t stay home all day, every day.’
     b. Doubled can’t (followed by doubled must) (ASL, Wilbur & Patschke 1999: 17)
        Context: Signer wants to stay home after birth of baby but discovers she can’t:
            br
        FIND PRO.1 CAN’T STAY HOME CAN’T, MUST GO WORK MUST
        ‘(However), I found that I couldn’t stay home; I had to go back to work.’

The problem identified by these examples can be stated simply: if brow raise pragmatically indicates ‘forward looking, prediction’, or if head/upper face NMMs are prosodically driven with an intonation-like function (as per Sandler (2010)), why does the brow raise just stop? Also, why in (8) and (9) are there separated brow raises in slower signing that disappear in faster signing? These are not isolated examples, but representatives of classes of structures across multiple contexts, regions, signers, and researchers – that is, the data for ASL are not in dispute (although similarly extensive data for other sign languages are lacking). To understand what is happening, we need to introduce the notion of semantic operators. The concept is fairly simple: an operator applies to an input to create a different output. For example, given the sentence ‘Endigo is tall’, a negation operator (in English ‘not’) will change its meaning to ‘Endigo is not tall’. Operators can change assertions into questions and obligations (‘must’) into permissions (‘may’), for instance, and they may have other meaning-changing functions. A language may have a word or morpheme that performs the operator function (such as ‘not’), or use intonation or stress, or other surface marking. Learning a language fluently requires learning how each language marks these semantic differences.


What Wilbur & Patschke (1999) observed is that the type of semantic operator made a difference in the spreading behavior of NMMs. For example, [+neg] and [+wh] operators both allowed the NMMs to spread until the end of the sentence, whereas a collection of structures with a [-​wh] operator displayed the brow raise stopping sooner. This latter group included topics (as in (5) and (8) above), conditionals (11), relative clauses (6a–​c), focus structures (10), among others. However it was not until Wilbur’s (2011) analysis that the reason for the difference between the two groups was identified, namely, that there are two types of operators active in ASL, monadic ([+wh], [+neg]) and dyadic/​restrictive ([-​wh]). NMMs associated with a monadic operator spread over the c-​command domain, whereas NMMs associated with a dyadic operator only mark the restriction of the operator but not the whole nuclear scope, hence stopping before the end of the sentence. We need to explain this difference. Monadic operators take a single input argument and apply to it. Negation is a monadic operator: it scopes over what it negates (‘Endigo is not tall’). Likewise, a [+wh]-​operator scopes over what it focuses. In ASL, we actually see this: negation is marked by a negative headshake over its c-​command domain (scope), and the [+wh]-​operator is marked with brow lowering over its c-​command domain (scope). In contrast, a dyadic operator not only has a scope (the main part of the sentence), it also has a restriction (which information is relevant for the interpretation of the main part of the sentence), thus requiring two pieces of information (hence the term ‘dyadic’). Dyadic operators relate two semantic constituents to each other (cf. Krifka et al. 1995), one of which restricts the domain in which variables in the other may be interpreted. The operator binds variables in the restriction; those in the nuclear scope are bound by (monadic) existential closure. For example, if we have a sentence that begins with a conditional clause (see example (11) above), it provides the conditions under which the next clause is likely to hold. We say that the conditional operator has a restriction (the conditions contained in the conditional clause) and a main/​nuclear scope (the content in the consequence). In ASL, only the conditional clause is covered by brow raise, not the consequence clause; that is, only the restriction has brow raise, not the main/​nuclear scope. Without ‘brow raise’ on the first clause, the two clauses are read as conjoined, not as a conditional followed by a main clause. Thus, brow raise has a narrowly defined domain and cannot spread over its c-​command domain. Dyadic operators are present in various constructions cross-​linguistically:  conditionals, interrogatives, focus structures, relatives, and generics (cf. Lewis 1975; Partee 1991; Carlson & Pelletier 1995; Chierchia 1995). As Wilbur (2011) observes, all such dyadic operators in ASL are marked with brow raise on their restrictions. Thus, the semantic operator analysis provides an explanation for the list of ASL structures that take brow raise and for the absence of syntactic c-​command spreading. 
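The difference can be made concrete with a schematic logical form for each operator type (a simplified tripartite notation in the spirit of Krifka et al. (1995); the predicate wording is illustrative only):

\[
\begin{aligned}
\text{monadic ([+neg]):}\quad & \mathrm{NOT}\,[\ \text{Endigo is tall}\ ] \quad (\text{the NMM spreads over the entire scope})\\[4pt]
\text{dyadic (conditional, cf. (11)):}\quad & \mathrm{OP}\ \underbrace{[\ \text{it rains tomorrow}\ ]}_{\text{restriction: brow raise}}\ \underbrace{[\ \text{I do not go to the beach}\ ]}_{\text{nuclear scope: no brow raise}}
\end{aligned}
\]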
Although brow raise is relatively common across other sign languages for marking topics and possibly other structures (but see discussion of relative clauses in Branchini, Chapter  15, and in Wilbur (2017)), we do not currently have evidence that in other sign languages, there is the same systematic behavior for brow raise (or for that matter, any other NMMs) of the kind seen in ASL. For this type of evidence, we would need to see a systematic occurrence of a NMM over all and only the restrictions of those structures with dyadic operators, which, because they are semantic, should be the same cross-​linguistically. But this is not a necessity: some sign languages might divide dyadic operators into subgroups, with different NMMs for the restriction on each (this is not


semantically predicted, but is morpho-phonologically possible), or every dyadic operator could have its own NMMs; likewise, monadic operators could be subdivided. In short, the systems of NMMs have not been investigated in sufficient detail to allow us to understand how they work across sign languages. Finally, we can raise the question of whether pragmatic, syntactic, or prosodic approaches can provide the same level of explanation that the semantic operator analysis does. At the same time, we must emphasize that this does not mean that prosody is not involved. Consider (13a) and (13b), which fundamentally rely on prosody to signal the two different semantic interpretations and different syntactic derivations.

(13) a.     br
        MARY, JIM LOVE TEASE t
        ‘(Jim doesn’t like to tease Jane.) It’s Mary who Jim loves to tease.’
            br
     b. MARY, JIM LOVE TEASE Ø
        ‘As for Mary, Jim loves to tease her.’
     (ASL, Wilbur 1997: 97)

Focus information in (13a) provides contrast (as a correction) to information that the sender believes the receiver holds. In (13b), Mary is (re-)established as a topic, and the assertion is that Jim loves to tease her. Prosodically, MARY in (13a) is the focus and must receive the primary stress for the sentence; in (13b), MARY is backgrounded and the primary sentence stress appears in the main clause JIM LOVE TEASE. Such distinctions are not predicted by the standard Prosodic Hierarchy, but do fall out straightforwardly from Selkirk’s model with the added syntax-prosody interface. We need to look at one additional distinction between brow lowering for [+wh] and brow raise for [-wh] to get the full picture of what is involved, namely their behavior under embedding (Wilbur 1996: 250). In ASL as well as in English, the verb ‘wonder’ takes a [+wh] embedded interrogative complement (marked with brow lowering (‘bl’) in ASL), as shown in (14).

(14) a. English: Mary wondered what /*that Susan bought.
                               bl
     b. ASL: MARY WONDER [SUSAN BUY WHAT]
                               bl
     c. ASL: MARY WONDER [WHAT SUSAN BUY]

Because a wh-cleft (roughly ‘what X did was Y’) is not a [+wh] complement in either English or ASL (despite the presence of a wh-word), it is not possible to embed it under ‘wonder’, as shown in (15) (note the brow raise on one portion of the ASL clause in (15b)).

(15) a. English: *Mary wondered [what Susan bought was a new suit].
                                     br
     b. ASL: *MARY WONDER [SUSAN BUY WHAT, NEW SUIT]

Evidence that the wh-​cleft is a plain assertion comes from the fact that it can be embedded under factive verbs like ‘know’ in both English and ASL, see (16).

(16) a. English: Mary knows (that) [what Susan bought was a new suit].
                            br
     b. ASL: MARY KNOW [SUSAN BUY WHAT, NEW SUIT]

Examples (14–​16) show that the relationship between the NMMs and the syntax and semantics is more complex than it first appears. These examples also document that generalizations that are made by articulation type (lowering vs. raising) or by articulator (brow) can miss important differences in how each works in a given sign language. Finally, notice that the brow raise in (16b) does not start until the embedded clause. If as Sandler (2010) argues, brow raise indicates that there will be more information to follow, we are left without an explanation for why the brow raise does not start at the beginning of the sentence. We turn now to another example of the interaction of syntax, semantics, and prosody.

24.4.2.2  Information structure (focus) and marking syntactic derivations with prosody

Churng (2011) provides derivations for a triplet of ASL wh-questions which have the same signs in the same order, but with different NMMs depending on which part of the question is in focus (see Figure 24.2). Churng extends Wagner’s (2005) syntax-prosody interaction to ASL: first, that prosodic stress marking reflects adjustments resulting from phrasal movements, and second, that prosodic breaks/gaps mark different levels of embedding. Prosodic ‘reset’ is what happens when ongoing NMMs end and are then followed immediately by the onset of different NMMs for the next function. Prosodic ‘gap’ is defined as holding and/or pausing during articulation. It is expected that a prosodic break will follow every prosodic phrase (at an ideal signing rate). These gaps mark each left-peripheral functional projection that is crossed by an A-bar moved element. These gaps accumulate, reflecting the number of levels of syntactic hierarchy that have been crossed (see data from Grosjean’s work in Section 24.6.2.2). Thus, both prosodic resets and gaps are predictable from the syntactic structure. Churng (2011) shows that with the two notions of reset and gap, she is able to distinguish the three possible meanings. Prosodic reset distinguishes NMMs for regular wh-marking (just a question) and for focus wh-marking. Prosodic gaps are the results of syntactic structure and movement.

The first meaning (17) is ‘What foods did you eat for what reasons?’ A possible answer could be a list: ‘I ate oatmeal, and I ate it because it makes me feel healthy; caviar, because it makes me feel wealthy; …’. A second meaning (18) is ‘What foods did you eat, and why did you eat at all?’ A possible answer is ‘Just a snack – I didn’t realize it was so close to dinner time.’ Churng provides the context and possible answer for the third meaning (19) ‘I heard you started your low-cholesterol diet with breakfast this morning. What did you eat, and why did you eat it? (A: I ate oatmeal, and I ate it because it’s heart-healthy.)’

(17)         wh                 foc
      YOU EAT,  WHAT,           WHY
      What foods did you eat, why? ‘What foods did you eat for what reasons?’
                                                        (ASL, Churng 2011: 10)

(18)         wh       foc       foc
      YOU EAT,  WHAT,           WHY
      What and why did you eat? ‘What foods did you eat, and why did you eat at all?’
                                                        (ASL, Churng 2011: 10)

(19)         wh       foc       foc
      YOU EAT,  WHAT,           WHY
      What foods did you eat, why? ‘What foods did you eat, and why did you eat it?’
                                                        (ASL, Churng 2011: 11)

In (17), which has regular wh-​marking followed by focus wh-​marking, there is a prosodic reset between them. Each time movement crosses a left-​peripheral functional projection, a prosodic reset results, thus ensuring that prosodic domains reflect derived phrases, and that multiple phrases in the left periphery are separated from one another. The commas in (17) to (19) represent required gaps, which are represented in Figure 24.2 by a vertical bar. Churng (2011) derives (17) with three syntactic operations, each of which leaves a break (including the final one):  (i) WHY moves to FocusP; (ii) WHAT moves to SpecCP2 (normal wh-​movement), and (iii) YOU EAT moves to Topic position adjoining CP.

Figure 24.2  Three different productions of multiple wh-​questions. Straight vertical line = ‘gap’, on/​off power button symbol = ‘reset’ of NMM (Churng 2011: 39–​40)

Churng (2011) argues that, unlike (17), (18) and (19) are both underlyingly biclausal, which is represented in the English translation by overt conjunction (‘and’). In the ‘at all’-​reading (18; Figure 24.2b), two separate TPs are extracted to the left periphery, with a prosodic gap between WHAT and WHY but without NMMs reset, as both are wh-​marked and focus-​marked. In the ‘it’-​reading (19), one large TP is moved, so there is no break between WHAT and WHY, and also no reset between them because again, they are both wh-​marked and focus-​marked. Churng’s analysis exactly matches the NMMs resets and gaps to the differences between the three syntactic derivations and meanings (the reader is referred to Churng’s work for details). To accomplish this, she did not need to make any special assumptions about ASL, syntax, or prosody. We take this as additional evidence for a model like Selkirk’s with three components: syntax, prosody/​phonology, and a syntax-​phonology interface.

24.4.2.3  A closer look at the full variety of head positions and movements

Ichida (2004, 2010) conducted a comprehensive analysis of head positions, movements, and functions in Japanese Sign Language (JSL). Ichida (2010: 16) argues that the
suggestion that the role of non-​manual behaviors is the signed analogue of intonation (pitch) in spoken languages is incorrect because the role of NMMs “is far larger than that of intonation in spoken languages”. He analyzed five head movements: nod (down), nod (up), shake, thrust, and change of head position. Each can be combined with four chin positions: up, down, forward, and back, for 20 combinations of movement and position. He further noted two distinct timing options (simultaneous/​after/​repeat; hold/​release), yielding 40 combinations just for the head. Ichida also indicates that the three relevant eyebrow positions (up, down, and neutral; Wilbur & Patschke (1999)) give him a full total of 120 meaningful combinations of head/​chin/​brow. Some of these combinations determine sentence types; he argues that the head movement is obligatory in those cases. His analysis suggests that headshakes with different chin positions have different semantic interpretations. For example, headshake is used for negation. With the four different chin positions, there are in fact four types of headshake, and they each carry distinct meanings. In response to the question ‘Isn’t he married?’ (which carries a positive bias, that is, the speaker presupposes that ‘He is married’; Gökgöz & Wilbur (2017)), an answer with a headshake with the chin up means ‘He isn’t married’, which is a simple negation of the proposition ‘He is married’; a headshake with the chin down, in contrast, means ‘He is married’; a headshake with the chin forward yields ‘It is not (the case that) that he isn’t married’; and finally, a headshake with the chin back position means ‘negation of involvement’, that is, ‘I don’t know.’ Note especially that each claimed position is identified with a clear semantic meaning; while it may not be possible (yet) to identify the specific contribution of each chin and head position, these subtle semantic and pragmatic differences are traced systematically through his analysis, identifying an extensive and carefully marked set of messages that are likely to be missed by the untrained eye, such as a language learner or non-​native linguist. Ichida (2004, 2010) also documents the syntactic use of different head positions and movements for separating conditional clauses from purposive clauses, embedded (subordinate) clauses and relative clauses from simple clauses, and other such functions. He also argues against researchers who have erroneously claimed that JSL has no grammar and lacks complex sentences, or that it has the same grammar as Japanese because it has the same basic word order (subject-​object-​verb), by pointing out that these researchers have completely missed the role of the head. In essence, the head and other NMMs perform the functions of overt morphology in spoken languages, whether in the form of independent words or prefixes/​suffixes/​affixes, as well as some of the functions of intonation. Ignoring these markers underestimates the complexity and gives a false sense of what information is present. Ichida’s observations make it clear that there is much to the grammars of sign languages that has not yet been fully explored, and that the head is involved in ways that are much more intricate and regular than generally assumed.
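The arithmetic behind Ichida’s totals can be laid out explicitly; the display below simply multiplies out the category counts he reports (it is an illustrative tally, not notation taken from Ichida’s own work):

\[
5 \times 4 = 20 \;\text{(head movements} \times \text{chin positions)}, \qquad
20 \times 2 = 40 \;\text{(} \times \text{timing options)}, \qquad
40 \times 3 = 120 \;\text{(} \times \text{brow positions)}.
\]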

24.5  Evaluation

We began this discussion by adopting the notion that solutions to the puzzle of how best to describe NMMs should make correct predictions. Some questions that we might expect to see answered by a good solution include: 1. Does the explanation correctly predict when the NMMs will appear? 2. Does the explanation correctly predict when the NMMs will not appear?
3. Does the explanation correctly predict whether the absence of NMMs will be grammatical or not? 4. Does the explanation correctly predict what the NMMs will contribute to the meaning of the whole? 5. Does the explanation correctly predict that the associated meaning will not be present if the NMMs are not present unless an overt sign with that same meaning is present? For example, on this last question, if the associated meaning is present even without the NMMs because a manual sign is present, then the relationship between the manual sign and the NMMs is more likely lexical than syntactic, and would put those NMMs into a different category from others under consideration here. Likewise, for question 4, if we have the hypothesis that brow raise in a given language contributes ‘forward looking, prediction’, then we expect that we should always see brow raise when that meaning is intended (the positive condition), and we should not see that meaning when the brow raise is not present (the negative condition). We have seen that both of those conditions are not met for ASL. Furthermore, each sign language must be independently tested –​just because something is true of one sign language does not make it true of another. We have also seen that the chosen linguistic model makes a difference in how we evaluate each claim. For example, we saw that Sandler’s original argument that evidence for non-​isomorphism is evidence for the independence of prosody does not hold up on several counts. First, from an updated dynamic model of the prosody-​syntax interaction, non-​isomorphism is expected as a result of language-​specific syntax-​prosody interface constraints rather than a flaw in the grammatical system. Second, even if non-​ isomorphism is taken as support for an independent prosodic component, by itself that is not sufficient to show that NMMs are prosodic without recourse to other components of the grammar. Third, we would still need an explanation for why NMMs like brow raise in ASL have different spreading domains when compared to brow lowering for [+wh] or negative headshake for [+neg], and for when it is not present at all. Another argument offered for the idea that NMMs are prosodic in nature is that multiple NMMs can be combined, like musical notes to make a melodic tune. The most frequently cited example is the use of squint to indicate that that information is ‘shared knowledge’ in ISL (Dachkovsky & Sandler 2009). The compositional nature of NMMs (multiple pieces combining with each other) has long been known (going back at least as far as Baker-​Shenk (1983), and referred to as ‘layering’ in Wilbur (2000)), so by itself that is not a signature characteristic of a prosodic-​only or prosodic-​pragmatic analysis. Likewise, ‘brow raise’ has been suggested to mean ‘more to follow’. If brow raise is ‘forward looking’, how can its presence on restrictive relative clauses be explained? As discussed in Section 24.4, Wilbur (2011) argues that ‘br’ in ASL is the overt marking of the semantic restriction of dyadic [-​wh]-​operators, thus not licensed to spread over the c-​command domain. In all ASL constituents marked with ‘br’, the reading is restrictive, limiting the interpretation of the main clause/​nuclear scope (following Partee (1991)). Thus, it makes sense that restrictive relative clauses would carry the ‘br’ marking in ASL. If brow raise means ‘more to follow’, why isn’t the whole complex NP covered by ‘br’ rather than just the relative clause? 
At the same time, it is important to understand that although the system in ASL for use of ‘br’ is well-studied, for other sign languages that display ‘br’ on relative clauses, their system of ‘br’ usage differs from that of ASL (Wilbur 2017). In fact, Kubus (2010,
2014)  reports that TİD relative clauses are marked with cheek raise and tensed upper lip and a squint similar to the ‘shared knowledge’ squint mentioned above. An optional ‘relativizer/​nominalizer’ marker includes raised eyebrows (‘br’) and optional open mouthing /​o/​. For Italian Sign Language (LIS) relative clauses, Branchini & Donati (2009) report the use of a complex marker consisting of both ‘br’ and ‘tensed eyes’ (the upper area of the face including eyes and cheeks, also possibly equivalent to ‘shared knowledge’ squint). They note that ‘br’ occurs in several syntactic environments, including yes/​no-​ questions, conditionals, topics, and focus constructions, and that ‘tensed eyes’ seems to be used only in extraposed constituents. A closer look at this squint behavior led Brunelli (2011) to conclude that in fact ‘tensed eyes’ is the primary marker of relative clauses (rather than brow raise, which only occurs on relative clauses moved to topic position). If Brunelli is correct about LIS ‘tensed eyes’, then this language would not parallel ASL NMMs in that the job performed by ‘br’ in ASL could instead be split between ‘br’ and ‘tensed eyes’ (and perhaps other NMMs). A  full set of tests needs to be run on each NMM in each sign language before any decision can be made. As an analogue to spoken language intonation, the prosodic approach according to which NMMs serve this function leaves much to be desired. In speech, the envelope of the intonation pattern is part of what holds a sentence together, distinguishing it from a mere list of words in sequence. Weast (2008) has identified some possible intonation-​like behavior in her careful measurements of brow height across sentences. But NMMs by themselves are not the signed analogue of this sentence envelope –​the movement of the hands carries this information.10 Mathematical analyses of sign movement, using motion capture and fractal analysis (Malaia & Wilbur 2012; Malaia, Wilbur, & Milković 2013; Malaia, Borneman, & Wilbur 2016; Malaia & Wilbur 2019), hold out hope for identifying a true intonation analogue and subsequent development of a PRAAT-​like program for sign languages. Beyond that, Bross & Hole (2017) have taken a bolder linguistic step in the direction of explaining NMMs as functionally not unique to sign languages. In particular, using DGS, they have identified a clear correlation with functional projections (phrases) in cartographic syntax (Cinque 1999, inter alia). Projections highest in the hierarchy are more likely to have wider/​higher scope and more likely to be produced with a “body part that can be ordered relative to other expressions on a vertical axis” (Bross & Hole 2017:  14). In short, when the semantic/​syntactic function is higher in the hierarchy, it is likely to be produced as NMMs and to use facial expression, head, or shoulders. When it is lower down, it is more likely to be conveyed by a manual sign.11 Given that the Cinque hierarchy has been constructed from extensive cross-​linguistic research, it provides a robust roadmap for investigating semantic/​syntactic functions of NMMs. In particular, the highest categories (speech act; evaluation; epistemic) are mapped to the upper face (eyebrows and eyes) in DGS (Bross & Hole 2017) and TİD (Karabüklü et al. 2018). Slightly lower categories (scalarity, e.g., ‘a lot’ with puffed cheeks) are on the lower face in DGS (Bross & Hole 2017; Bross 2018) and ASL (Wilbur & Nikolai 2019). 
It is quite clear that the earlier literature focusing on NMMs as adverbials and adjectivals has missed a major portion of the semantic contributions made by facial NMMs. There is much exciting cross-linguistic typological work remaining to be done in this area. Again, it is hard to see how a prosodic approach to NMMs would be able to explain these NMM-functional projection correlations for further cross-linguistic testing. Furthermore, rather than treating combinations of head/brow/eye positions as surprising
occurrences of pragmatic composition, Bross & Hole’s approach predicts which combinations of NMMs are more or less likely to occur and what they will mean. These predictions can be tested by standard linguistic and experimental techniques, to which we now turn.

24.6  Experimental perspectives

In this section, we report the results from studies on the acquisition of NMMs (Section 24.6.1), on production effects on NMMs (Section 24.6.2), and on the processing of NMMs, including neurolinguistic reports (Section 24.6.3).12

24.6.1  Acquisition of NMMs

Newport & Meier (1985) observed that grammaticalized facial expressions derive from a clear non-linguistic system, from which they differ strikingly in organization. They considered NMMs as a unique form of bound morphology, with components (e.g., AUs) and significant scope, and noted that they play a “significant and often obligatory role” in distinguishing certain lexical items, adverbials, and syntactic structures. The research on the acquisition of NMMs tends to follow this viewpoint.

24.6.1.1  Earliest use of signs and face Anderson & Reilly (1998) address two types of NMMs, those that can be used for either affective communication or linguistic meaning (dual-​function) and those with only linguistic meaning (single function, in this case, NMM adverbs). Dual-​function NMMs can be things like the affective facial expression that conveys sadness, which is shown by signers and non-​signers alike. Despite the fact that newborn infants display the ‘cry face’ at birth, in order for them to acquire the linguistic NMMs for ‘sad’, they must learn the semantic concept, which they first express with the sign SAD, before learning to put the NMMs on that sign and possibly others in the same phrase (spreading). Thus, children do not learn the manual sign and NMMs together as a lexical item (i.e., not as a ‘gestalt’) but separately as components. Also, when these NMMs start to appear, they are nearly perfectly timed with respect to the manual sign, which contrasts with the developmental errors in using NMMs for more complex grammatical marking (Section 24.6.1.2). With respect to NMM adverbs (single function), children did not develop them until after they had already demonstrated use of the manual signs expressing the same underlying concept. As an example, children used the signs MORE and MANY at about 18 months of age, but the facial adverb ‘puff’ (cheeks) meaning ‘a lot’ was first observed at 23 months. Likewise, the sign WRONG was produced around 24 months but the NMM adverb ‘th’ was not observed until 29 months. The manual sign is learned before the corresponding NMMs; Anderson & Reilly indicate that this ‘hands before faces’ strategy can be taken as evidence that the NMMs they are investigating are bound morphology that is acquired later. In general then, for these types of NMMs (affective uses with signs, and NMM adverbs), the lexical sign (hands) is acquired before the bound NMMs (face), and the NMMs are in wide use by age three, although not all have been learned by that age. Their timing and scope with respect to the hands is nearly error free; however, this contrasts with NMMs used in complex syntactic structures, to which we now turn.

24.6.1.2  Grammaticalized NMMs for syntactic purposes Reilly, McIntire, & Bellugi (1990) studied the acquisition of conditional sentences by 14 deaf native ASL learners (ages 3;3–​8;4). Conditional sentences are complex, involving two clauses, the ‘if’-​clause (antecedent) and the main clause (consequent) (see (11) above for an example). The NMM components for conditionals include brow raise, head up and tilted, eye gaze change, and eye blinks (Baker & Padden 1978; Wilbur 1994a) with a head thrust on the last sign of the antecedent (Liddell 1986). In ASL, the antecedent has these specific conditional NMMs, whereas the main clause can be an assertion or question, with typical NMM for each. If the antecedent is (incorrectly) not marked by NMMs or a conditional sign (e.g., IF, SUPPOSE), then the whole construction can be interpreted as just two sentences next to each other rather than as a conditional. Furthermore, if the correct NMMs are present, the conditional sign can be omitted, thus making mature comprehension of conditionals dependent on interpreting the facial components and not just the manual sign. Reilly et  al. (1990) also observed a ‘hands first, face second’ progression in both comprehension and production, and report four stages in this development. Initially (around age 3), children produce two juxtaposed propositions (forerunners of antecedent and consequent) with no NMMs and no manual conditional signs. In the second stage, around 3;6, conditional signs begin to appear, and at about 3;10, some prosodic boundary marking NMMs, blinks and head nods, are observed marking the juncture between the two clauses. In the third stage, around ages 5–​6, some NMMs are used (about 75% of the time) but they generally are missing components (only complete 20% of time) and only occur with the manual conditional sign; that is, the spreading domain of the NMMs over the antecedent has not yet been learned. The fourth stage, starting around 7;6, shows the addition of other NMM components, such as the head thrust on the last sign and the spreading of the NMMs over the whole antecedent. Full mastery is reported by 8 years of age. Reilly et al. conclude that children’s acquisition of ASL conditionals follows the general developmental trend of preference for free morphemes (lexical signs) before bound morphemes (NMMs), and that once the analysis of syntactic structures has started, children proceed to acquire NMMs rapidly in a “patterned, and linguistically driven manner” (see Gökgöz, Chapter 12, for acquisition of the negative headshake). Finally, Reilly et al. (1990) consider and reject the suggestion that NMMs are similar to intonation in spoken languages. They note similarities, such as co-​occurrence with segmental linguistic information (words and signs) and playing multiple roles (e.g., affective, grammatical). However, they argue that affective NMMs are qualitatively different from grammatical NMMs. In particular, as Baker-​Shenk (1983) documented, grammatical NMMs are highly constrained (e.g., sharp onset/​offset and coordination with respect to signs) whereas affective NMMs are inconsistent. Viewing the NMMs as bound morphology (those needing to occur over signs, but not necessarily bound to a particular sign) provides a better explanation of the observed acquisition stages. 
These acquisition sequences provide strong support for differentiating grammatical NMMs in sign languages from the facial gestures that hearing people normally make during spoken conversations, that is, that grammatical NMMs are not just borrowed from speech production but require specific learning of correct form and domain, which is sufficiently difficult that children take years to accomplish the full task.

24.6.2  NMMs and sign production

Several approaches can be taken when looking at NMMs as used by adults. One focuses on fluent users, another on learners.

24.6.2.1  Trying to speak and sign at the same time

In comparison to what we see with children, we see other evidence of the grammatical nature of NMMs when adults who are teaching young deaf children try to speak and sign at the same time (which is not signing ASL and speaking English, which is theoretically impossible given only one brain per person; rather it is signed English accompanying spoken English; cf. Wilbur & Petersen (1998)). Theoretically, simultaneous speaking and signing involves the same number of words in each modality because they both are coding English (or whatever spoken language is the target). However, numerous mismatches arise in the number of words because Signed English (SE) often requires separate signs for English suffixes (plural -s, past tense -ed, etc.). A one-syllable spoken word (e.g., cats) requires two signs in SE (e.g., CAT + plural). Hence, the one syllable for ‘cats’ is matched by two full sign productions. These mismatches affect larger units of combined speech and signing (narratives, classroom storytelling, or lectures).

In a study of two groups of hearing signers, those who learned ASL from birth from deaf parents (children of deaf adults, CODAs) and those who do not know ASL but who have learned to use SE for daily use with their own children or in classrooms (simultaneous communicators, SIMCOMs) for at least 10 years, significant differences in the use of NMMs, rate of production, and sign omissions were observed in three experimental conditions (Wilbur & Petersen 1998). The two groups were presented with short narratives which they were asked to (i) speak English only (speech-alone), (ii) sign English only (SE-alone), and (iii) speak and sign English at the same time (SC: speech-combined, SE-combined). When speech-combined is compared to speech-alone, speech duration is increased, due to longer syllable durations, greater number of gaps between syllables, and longer gap duration. In short, speaking while signing elongates speech. In contrast, when SE-combined is compared to SE-alone, sentence duration is shorter, with shorter sign duration, fewer gaps between syllables, and shorter gaps. That is, in order to sign and speak at the same time, speech must be slowed down, and signing must be speeded up. This leads to a large number of omissions of signs in the SE-combined condition compared to SE-alone.

Here is where knowledge of ASL and NMMs enters the picture. CODAs omitted signs that could be compensated for with use of spatial agreement (instead of separate pronoun signs, verb agreement is produced, reducing number of signs while still carrying the same grammatical information), using reduplication on verbs instead of separate aspect and adverb signs, and so on. As for NMMs, in 40 SE occurrences, CODAs marked 90% of wh-words and -clauses with appropriate brow position for wh-interrogatives (lowering) and did not omit any wh-signs. In SE-combined, they marked 65% and omitted only one wh-sign. The SIMCOM group, not knowing ASL, marked only 15% in SE-alone and 13% in SE-combined, and deleted 13% and 8% of the wh-signs, respectively. Likewise, nearly all (47/48) SE productions of yes/no-questions by CODAs had a brow raise. In contrast, only 9 out of 48 SE productions by SIMCOMs had brow raise. Another 26 (of 48) productions of yes/no-questions were incorrectly marked with brow lowering. Thus, 81% of the yes/no-questions produced by the SIMCOM group did not
have ASL-​appropriate NMMs and were not environments for permissible deletion of the question markers at the end of the question. Also, CODAs marked half of their negative questions with negative headshake, while the SIMCOM group did not mark any of their negative questions with negative headshake. For negative statements, the CODA group marked 30/​32 with negative headshakes, whereas the SIMCOM group marked only 19/​32. Moreover, for the SIMCOM group, even when the negative headshake was present, it scoped only over the negative sign and occasionally over the verb, but not over the whole verb phrase or sentence, as would have been appropriate in ASL. The SIMCOM negative headshake pattern was not like the pattern for ASL, but it was also not like the pattern shown by hearing non-​signers when speaking English. In summary, the NMMs patterns seen in ASL are not simply borrowings from what hearing people do during ordinary speech, nor are they predictable facial gestures that require no specialized learning to use. While they may be derived from hearing facial gestures (Benítez-​Quiroz, Wilbur, & Martínez 2016), they are not merely transferred from hearing persons’ spoken conversations. Finally, it is abundantly clear from the results of this study that signing and speaking English at the same time for anything other than single sentences as demonstrations comes at the expense of both the speech, which is distorted, and the signing, which is both neglected by massive omissions and in danger of being incomprehensible.

24.6.2.2  Signing rate effects on NMMs

In a series of studies, Grosjean (1977, 1978, 1979; also Grosjean & Deschamps 1975; Grosjean & Lane 1977; Grosjean & Collins 1979) reported similarities and differences between speaking and signing on performance variables such as pausing and rate of signing. Hearing subjects were asked to read a 116-word passage at a normal rate, and then at four other rates. Speech duration, breathing pauses, and non-breathing pauses were measured with oscillographic recordings of speech and rib-cage movement (Goldman-Eisler 1968). The same passage was translated into ASL for 5 signers to produce 4 times each at 5 different rates. Parallel to studies of pausing and syntax in spoken languages, Grosjean and colleagues report that for speech, pauses with durations of greater than 445 ms occurred at sentence boundaries; pauses between 245 and 445 ms occurred between conjoined sentences, between noun phrases and verb phrases, and between a complement and the following noun phrase; and pauses of less than 245 ms occurred within phrasal constituents. The ASL findings (Grosjean & Lane 1977) were that pauses between sentences had a mean duration of 229 ms, pauses between conjoined sentences had a mean duration of 134 ms, pauses between NP and VP had a mean duration of 106 ms, pauses within the NP had a mean duration of 6 ms, and pauses within the VP had a mean duration of 11 ms. These results clearly show that sign sentences are organized hierarchically with internal constituents: the longest pauses are between sentences; shorter pauses occur between constituent sentences analyzed as parts of a conjoined sentence; and the shortest pauses appeared between sentence-internal constituents. The pausing (prosody) reflects the syntax, as Selkirk predicts, and this mirrors the results reported by Churng (2009).

Wilbur (2009) conducted a similar study to determine rate effects on both signs and NMMs. When changing signing rate, signers adjusted sign duration, number of pauses, and pause duration, in accordance with Grosjean’s previous observations. Similarly, changes in signing rate affected the number of NMMs but importantly not the signs that are covered by them. Here are some details to show how this can happen.

Signers produced three significantly different signing rates (fast, normal, slow), as calculated in signs per second. The number of signs was consistent across four trials of each of three stories. Rate was a main effect on sign duration, pause number, and pause duration, as well as on NMM variables:  brow raise duration, lowered brow duration, number and duration of eyeblinks, and number of head nods. This indicates that signers produced their different signing rates without making dramatic changes in the number of signs, but instead by varying the sign duration (this finding contrasts with the results reported by Wilbur & Petersen (1998) of massive sign omission when experienced signers speed up signing to try to speak and sign at the same time). With respect to NMMs, increased signing rate decreased the duration of three measured NMMs: brow raise, lowered brows, and eyeblink. The number of blinks also decreased with increased signing rate. At faster signing rates, signers do not bother to lower the brows between phrases, so instead of, for instance, three separate brow raises, only one (longer) raise may be observed, yet all the signs that are supposed to be in the brow raise domain are still covered by brow raise. That is, if we simply count brow raises, there are fewer in faster signing than in normal or slow signing –​and the same is true for other NMMs. But brow raises are present where they are predicted to be by Wilbur’s (2011) analysis. This study observed a split in the behavior of the NMM ‘lowered brows’, which is relevant to our understanding of NMMs. Some of the lowered brows are licensed by wh-​ questions; these lowered brows decreased in duration with increased signing rate, as just mentioned. However, the data also contained lowered brows that were not associated with wh-​questions but instead with affective material. These affective lowered brows occurred with descriptions of someone struggling to learn to knit, on signs such as STRUGGLE, BEAR-​DOWN, and others in the story. The presence or absence of lowered brows is not distinctive (not minimal pairs). The number of such lowered brows and their duration did not significantly vary with signing rate, and their duration was about twice that of wh-​ question brow lowering. In sum, the grammatically licensed lowered brows are integrated into the prosodic/​rhythmic structure and are modified along with other such variables as signing rate changes (this reflects the syntax/​prosody interface). Those lowered brows that are not grammatically licensed operate independently of the other prosodic adjustments being made. While the distinction between grammatical and affective facial expressions has long been identified, this is the first study to show differential behavior specifically with respect to prosody. Wilbur (2009) argued that this pattern suggests that signed prosody, like that of speech, is characterized by an integrated, multivariate system of units that can be regulated in production across different contexts in which the system might be stressed to a greater or lesser degree as a result of changes in signing rate. Thus, there is no one-​to-​one mapping of syntax to prosodic constituents in sign languages (or spoken languages, like my New Yorker question). The existence of non-​isomorphy in linguistically meaningful NMMs is experimentally demonstrated to be related to production rate (among other variables), whereas affective facial expression is not.

24.6.2.3  Motion capture of NMM

Given the current state of equipment for motion capture, which requires some form of emitting or reflecting marker to be placed on the signer, few NMMs have been studied – in general, how natural could the NMMs be if the signer is wearing markers in sensitive
areas like the eyelids, eyebrows, lip corners, etc.? Two NMMs that have received some attention are the head and the non-​dominant hand. Looking first at the head, Puupponen et  al. (2015) analyzed four types of head movement in Finnish Sign Language (FinSL): nods, nodding, thrusts, and pulls. They suggested the following linguistic functions for these: (i) nods give affirmation and positive feedback; (ii) nodding provides positive feedback or echo phonology; (iii) thrusts occur in interrogatives and emphasis; and (iv) pulls occur for emphasis, contrast, and (semantic) exclusion. They used motion capture techniques to study the path movement of the head during these articulations, and report that each has a distinctive trajectory (but they only report the sagittal plane). Nods have a V-​shape, indicating an abrupt change of direction (Figure 24.3a); thrust has more of a U-​shape (Figure 24.3b); pull has an ‘upside-​down J-​shape’ (Figure 24.3c); and nodding has a damped, oscillating shape (not shown; trajectory is repetition of nod). Thus, the shape of the head movement trajectory is a carrier of information that is independent of the information carried on the hands (see also discussion in Malaia, Borneman, & Wilbur (2018)). They provide a fascinating glimpse of the kinds of information that is obtainable from each articulator with specialty software. However, further analysis of the function of each movement within FinSL is needed before we can understand their place in the syntax-​prosody debate.

Figure 24.3  Movement trajectories for the head: (a) nod; (b) thrust; (c) pull (Puupponen et al. 2015: 58–​61, Figures 6, 7, and 9; © John Benjamins, reprinted with permission)

As mentioned earlier, Ichida (2010) made a strong case for the importance of the head in distinguishing sentence types in JSL, arguing that to overlook the role of the head was to miss most of the grammar completely. Following up on Puupponen et al. (2015) and Ichida (2010), Malaia & Wilbur (2018) analyzed motion capture data of ASL for the head, dominant hand (DH), and non-dominant hand (NDH), calculating the information that each contributes to the complexity of the signed signal (using fractal complexity as a measure, beyond the scope of this chapter). As expected, the DH carried significantly more information than the NDH and the head, but the NDH and the head did not differ significantly from each other. Rather there are situations for both in which their role is redundant with the DH (reducing complexity, because their role is predictable, as in two-handed signs with any type of symmetry, or echo movement of the head emphasizing the DH movement) and also situations in which they act independently of the DH. These independent contributions increase the signed signal complexity; some examples are those discussed by Ichida (2010) when the head is used to provide information about the
sentence type, or the thrusts described by Puupponen et al. (2015) used for interrogatives and the head pulls used for semantic exclusion. Here the head is providing additional grammatical information that is not redundant with what the hands provide. We look next at the use of the NDH as a marker of the phonological phrase, because Sandler (2010) uses the behavior of the NDH as one of the arguments to support her claim that NMMs are prosodic. In her discussion of the rule of Non-​dominant Hand Spread (NHS), which can be observed when a two-​handed sign is made and then the NDH is held while the DH continues to make other signs, Sandler argues that the spread continues up to the ‘phonological phrase boundary’ in the corpus elicited by Nespor & Sandler (1999). It is clear, however, from the motion capture data of both elicited sentences and longer narratives used in Malaia & Wilbur (2018), that the spreading domain may, in fact, continue over multiple sentences. In fact, in his discussion of the domain of NHS, Crasborn (2011) also observed longer spans and suggests that such usage might be evidence for the existence of a ‘discourse phrase’ in which multiple units are joined by ‘prosodic cohesion’. Viewing the situation from an information transfer perspective, Malaia & Wilbur (2018) suggest instead that the specific function of NHS is more likely to be semantic cohesion because it is often the case that the NDH marker refers to an individual, object, or location in the discourse, and keeps this referent present in the discourse, conceivably as an aid to the viewer’s memory when the syntax is complex or as a strengthening mechanism if multiple referents are involved. Here, we have two potential explanations for the same phenomenon –​a prosodic ‘discourse phrase’ and a semantic reference marker; they are not incompatible with each other and could both be true at the same time. I will return to this point at the conclusion of this chapter.

24.6.3  Perception of NMMs

24.6.3.1  Eye-tracking of NMMs

Thompson et al. (2006) tested Neidle et al.’s (2000) proposal that eye gaze is a non-manual agreement marker of syntactic objects using an eye-tracking study with 10 native ASL signers. They coded eye gaze for transitive and intransitive verbs into five categories: addressee, subject, object, locative, and other. Their results showed that Neidle et al.’s claim seems to hold only for (traditionally defined) agreeing verbs, marking the object of regular transitive verbs (98.4%) and backwards verbs (82.5%). For spatial verbs, which take source and goal type arguments, they observed that the eye gaze was directed most often to the location of the locative argument, which typically also happens to be the subject, but that the gaze was directed lower in the signing space than the gaze height observed with typical non-spatial arguments. They summarized their findings as follows: with agreeing verbs, gaze marks the object (direct object for transitives, indirect object for ditransitives), and with spatial verbs gaze marks the locative. The study confirmed that eye gaze towards the object starts before the onset of the verb sign (about 160 ms) and may accompany object NPs which appear before or after the verb. Critically, with ‘plain’ verbs, eye gaze direction toward object location averaged only 11.1%. In line with Neidle et al. (2000), Thompson et al. (2006) interpreted eye gaze as a marker for syntactic agreement. However, in contrast to Neidle et al., they do not interpret eye gaze as an agreement marker independent of manual agreement. If this were the case, eye gaze agreement marking should also be observed in sentences with plain verbs.
Instead, Thompson et al. interpreted manual and non-​manual (eye gaze) agreement as two parts of one morpheme. Thompson et al. also considered other possible functions of eye gaze, such as regulating turn taking in discourse, checking addressee comprehension, marking role shift, focus marker, and point of view (Baker 1977; Baker & Padden 1978; Baker-​Shenk 1983, 1985; Padden 1983). While such functions of eye gaze may be used in other contexts, eye gaze does not serve these functions with agreeing and spatial verbs.

24.6.3.2  Neural processing of NMMs Neurological investigations of how NMMs are processed have reported differences between grammatical NMMs and affective facial expressions in native signers. From lesion studies, evidence indicates that grammatical NMMs are processed predominantly in the left hemisphere, whereas affective facial expressions are processed in the right (Kegl & Poizner 1997; Corina, Bellugi, & Reilly 1999). What this means is that if a signer has a lesion on the left side of the brain, processing of grammatical NMMs is impaired, whereas if the lesion is on the right side, affective facial expression processing is impaired. Likewise, studies using neurally healthy Deaf signers and hearing non-​signers show a difference between grammatical and affective facial use for signers but not for non-​signers (McCullough, Emmorey, & Sereno 2005). For signers, grammatical NMMs activated the left superior temporal sulcus (STS), whereas affective facial expressions produced bilateral activation in STS. In contrast, hearing non-​signers showed right hemispheric activation for the affective facial expressions. Deaf signers also showed activation in left fusiform gyrus for both types of facial expressions, whereas hearing non-​signers showed bilateral activation in this area. In a separate lesion study, described in detail in Gökgöz (Chapter  12), Atkinson et al. (2004) report data on the NMMs of negation in British Sign Language (BSL). They report that, contrary to what might be expected, signers with left hemisphere (LH) lesions are able to use facial information to understand negation, even when no manual negative sign is present. This would seem to contradict the prediction that damage in the LH should disrupt grammatical NMMs processing. They initially conclude that the negative NMM is not syntactic after all, but is rather prosodic. However, they cannot rule out a third possibility, namely that the LH damaged signers are using affective negative facial expression (described recently in Benítez-​Quiroz et al. (2016)). As the acquisition data shows, access to such non-​linguistic affective facial expressions is universally available at an early age, prior to language learning. Such an interpretation is supported by Henner, Fish, & Hoffmeister (2016), who reported judgments of 55 native Deaf signers with respect to the interaction of conditional marking (brow raise) and negation (headshake, lowered eyebrows). The standard expectation of grammaticality is that the conditional clause will have brow raise even if it contains a negative (‘If you don’t work, you can’t buy new clothes’). They found that nearly 60% of their subjects chose as correct a form which did not have brow raise, instead marking the negative (i.e., lowered) eyebrows over the conditional and the negative headshake over the entire two clause structure. It may be that negation is so basic that it can take primacy of marking in unexpected places. These data highlight again that looking for a single, simple answer to the behavior of NMMs, even upper face and head as proposed by Sandler (2010), is not likely to be successful.

24.7  Summary and conclusion

In the end, how do we answer the question ‘What linguistic status should we ascribe to NMMs?’ As a starter, it should be clear that those that are not affective face or body gestures or mouthing are morphemes. If we use the definitions provided by SIL,13 we can talk about roots, stems, affixes, and clitics as possible morpheme options, and there is no reason to expect that NMMs are all the same type of morpheme. To determine where a particular NMM fits, clear answers to the following questions are needed:

1. In a given sign language, is there a clear correspondence between a particular NMM form and a particular meaning/function? The meanings/functions will determine whether the NMMs are present because of semantic, pragmatic, syntactic, morphological, phonological/prosodic, information structure, and/or discourse structure purposes. Conclusive testing is necessary to avoid misleading guesses such as Coulter’s assumptions about brow raise. To be clear, conclusive testing means that we need more than just a description that seems to fit the data – we need evidence that the proposed solution makes correct predictions, that the NMMs must be present to get the proper interpretation, and cannot be present if that interpretation is not intended. That is, we need not only positive examples of the NMMs where they are supposed to be but also negative (ungrammatical) examples of NMMs missing from where they should be, or being replaced by other NMMs yielding a different meaning or structure. It is not enough to look at some examples and describe what is there (as Coulter did). Testing must go beyond that first step.

2. Is the form free or bound phonologically? Free or bound syntactically? SIL distinguishes between clitics, which are both phonologically and syntactically bound, and various types of affixes, which are always bound (to a root [simple] or a stem [complex]). If the form is associated with an operator, the type of operator will determine what the spreading domain should be. How is the form affected by signing rate increases and decreases? By embedding into a main clause?

The argument is that a separation must be made between why a particular, for instance, head movement (or other non-manual) is present in the sentence (the message it conveys) and how the movement or other articulation behaves with respect to other articulators over the time course. We expect the entire production to be temporally coordinated to enable the signer to produce it with ease (Wilbur 2018) and the viewer to visually receive and interpret it with ease (these are prosodically visible modifications). Thus, finding that the beginnings and ends (boundaries) are coordinated with various phrasal sizes (phonological, clausal, sentential) says much about the motor and visual system but not much about the linguistic system itself (again, my New Yorker question is a good example of this distinction). Correlation of NMMs with prosodic alignment does not entail causation – that is, the prosody does not put the NMMs there (except for prosodic markers such as eyeblinks and some head nods (Wilbur 1994a)), rather it makes the NMMs conform to the prosodic phrasing. Sandler (2010) explicitly claims that the head and upper face contribute intonational information to the signed signal. This observation does not explain why or how the head and upper face obtain the information they contribute.
The cumulative data from the studies reviewed here show that the head and upper face are involved with clause typing (Ichida), CP-level functional projections (Bross & Hole), left-peripheral movements
(Churng), and restrictions on dyadic operators (my own work). That Sandler and others find prosodic alignment of head and upper face with intonational phrases is not an accident –​all of these syntactic and semantic functions are at least phrasal (if not clausal) in structure. As I said at the beginning of this chapter, I have argued that there is a close relationship between clausal structure and intonational phrases, but this is not the reason for the presence of all of these NMMs (Wilbur 1994a). Let us do a final review by comparing the NMMs discussed here with eyeblinks, which seem to be plain prosodic boundary markers (Wilbur 1994a). Looking at the function of phrasal boundary eyeblinks, I noted that there are four possible functions: (i) mark syntactic phrases, (ii) mark prosodic phrases, (iii) mark discourse units, and (iv) mark narrative units. We can predict the location where eyeblinks will occur with above 90% accuracy because the algorithm for eyeblink placement is sensitive to syntactic constituency, and hence intonational phrases. The remaining two functions (discourse unit marking and narrative unit marking) are built on intonational phrases, because their temporal ends are, of necessity, the ends of prosodic constituents and therefore likely to be intonational phrases. I concluded then, as I do now, that investigations into prosodic structure can provide insight into both sentence-​level syntactic structure and larger contextual-​level pragmatic and metrical structure. My mind has not changed about this. What has changed is my willingness to accept head and upper face NMMs other than eyeblinks as intonational, implying that they do not reflect broader syntactic and semantic functions first and foremost. The issue of the NMM parallel with intonation has also been explicitly addressed and rejected by Ichida (2010) and Reilly, McIntire, & Bellugi (1990), among others. Further, the work of Bross & Hole, and Karabüklü et al., documents that lower face NMMs are not just adverbials and mouthing, but also functional operator domains with scopes that must be marked. Thus, even lower face NMMs contribute to the broader syntactic and semantic structure of the signed utterance. Taken together, we agree with Pfau & Quer (2010) when they say: “Non-​manuals may play a role at all levels of linguistic description.” If they were simpler, this chapter would have been much shorter.

Notes

1 These codings represent individual or combined action units of the facial action coding system (FACS), defined by Ekman et al. (2002) and used to describe particular muscle actions on the face.
2 Herrmann (2013: 170) notes Selkirk’s updated (2011) model but does not apply it to her discussion of syntactic vs. prosodic analysis of NMMs. Likewise, she notes the projections in Cinque’s (1999) tree where various modal marking NMMs could originate, but concludes that “the general scope behavior neither favors nor conflicts with a syntactic analysis” and that therefore it “seems promising to take into account a pragmatic viewpoint as proposed in the prosodic approach” (cf. Herrmann 2013: 167–168).
3 Interested readers are referred to Scheer (2010) or Ramchand & Reiss (2007).
4 A further benefit of Match Theory is that it provides a better fit with syntactic phase theory (Chomsky 2001; Kratzer & Selkirk 2007) as well as an explanation for the need for recursivity in prosodic constituent representations, which Strict Layering prohibits (Ito & Mester 2009).
5 ASL is a discourse-configurational language (tracking new and old information that leads to structural changes) (Wilbur 2018). New information introduced into discourse as focus in one sentence becomes old in the next sentence. Old information may be omitted (pro-drop), used in pronoun form, or otherwise put in background constructions.
6 Lackner (2017) is also semantic in perspective, but she does not engage in the question of how to analyze NMMs grammatically.

7 The standard definition of c(onstituent)-command is that X c-commands Y if (a) X does not dominate Y, (b) Y does not dominate X, and (c) the lowest branching node that dominates X also dominates Y (https://en.wikipedia.org/wiki/C-command).
8 The only time a brow raise appears to spread over a c-command domain is in yes/no-interrogatives; Wilbur & Patschke (1999) argue that this is only apparent because the question marker (wiggle index finger) is optional (and likely borrowed) in ASL. In any case, it makes sense that the signs in the question are inside the restriction of the interrogative operator and therefore are covered by brow raise.
9 Caponigro & Davidson (2011) analyze these as question-answer clauses (QACs), essentially arguing that they are copular structure declaratives composed of an embedded question followed by an embedded answer, with a topic-comment like function (see also Davidson, Caponigro, & Mayberry 2008). Because their main concern is on the theoretical implications of the QACs, they do not systematically present tests for their structural claims. The most relevant issue to the current chapter is that their analysis provides no explanation for why the embedded content question (being a wh-question) has brow raise on it and not brow lowering. If, however, brow raise occurs on the restriction of [-wh] operators, then the analysis of the wh-clause as the backgrounded restriction of the focus operator that applies to the ‘answer’ is completely consistent with other uses of brow raise in ASL. Hauser (2018) provides a bridge analysis from the position taken by Caponigro & Davidson to the analysis offered in Wilbur (1996) by pursuing a grammaticalization path (as suggested in Kimmelman & Vink 2017), showing an intermediate stage in French Sign Language (LSF).
10 See discussion of jerk (third motion derivative) in Wilbur (1990: 97–98), Wilbur & Martinez (2002), and, for the fourth derivative, Martell (2005).
11 Another part of the Bross & Hole (2017) model notes further correlations. Those signs in the intermediate part of the hierarchy are ordered with left-to-right scope sequencing (two signs, one representing a projection higher than the other, will be ordered higher one first, lower one second), whereas within lower portions of the hierarchy, the ordering is right-to-left (giving sentence-final operator signs higher scope than non-final ones). Testing of this portion of the model in TİD, which is like DGS in being an SOV language, suggests that the switch from left-to-right sequencing to right-to-left sequencing is language specific, as TİD only allows right-to-left sequencing (Karabüklü et al. 2018).
12 I remind the reader that certain non-manuals have been excluded from discussion here. As for experimental investigations on possible lexical non-manuals, see Giustolisi, Mereghetti, & Cecchetto (2017) on mouthings and Pendzich (2017) on other lexical non-manuals, among others.
13 https://glossary.sil.org/term

References

Aarons, Debra. 1994. Aspects of the syntax of ASL. Boston, MA: Boston University PhD dissertation.
Anderson, Diane & Judith Reilly. 1998. Pah! The acquisition of adverbials in ASL. Sign Language & Linguistics 1(2). 117–142.
Atkinson, Jo, Ruth Campbell, Jane Marshall, Alice Thacker, & Bencie Woll. 2004. Understanding ‘not’: neuropsychological dissociations between hand and head markers of negation in BSL. Neuropsychologia 42(2). 214–229.
Bahan, Benjamin. 1996. Non-manual realization of agreement in American Sign Language. Boston, MA: Boston University PhD dissertation.
Baker, Charlotte. 1977. Regulators and turn-taking in ASL discourse. In Lynn Friedman (ed.), On the other hand: New perspectives on American Sign Language, 215–236. New York: Academic Press.
Baker, Charlotte & Carol Padden. 1978. Focusing on the nonmanual components of American Sign Language. In Patricia Siple (ed.), Understanding language through sign language research, 27–57. New York, NY: Academic Press.
Baker-Shenk, Charlotte. 1983. A micro-analysis of the non-manual components of questions in American Sign Language. Berkeley, CA: University of California PhD dissertation.
Baker-Shenk, Charlotte. 1985. Nonmanual behaviors in sign languages: Methodological concerns and recent findings. In William Stokoe & Virginia Volterra (eds.), SLR ‘83: Proceedings of the Third International Symposium on Sign Language Research, 175–184. Silver Spring, MD: Linstok Press.
Benítez-Quiroz, Fabian, Ronnie B. Wilbur, & Aleix Martínez. 2016. The Not Face: A grammaticalization of facial expressions of emotion. Cognition 150. 77–84.
Branchini, Chiara & Caterina Donati. 2009. Italian Sign Language relatives: A contribution to the typology of relativization strategies. In Anikó Liptak (ed.), Correlatives: theory and typology, 157–191. Amsterdam: Elsevier.
Bross, Fabian. 2018. The clausal syntax of German Sign Language: A cartographic approach. Stuttgart: University of Stuttgart PhD dissertation.
Bross, Fabian & Daniel Hole. 2017. Scope-taking strategies and the order of clausal categories in German Sign Language. Glossa 2(1). 1–30.
Brunelli, Michele. 2011. Antisymmetry and sign languages: A comparison between NGT and LIS. Amsterdam: University of Amsterdam PhD dissertation.
Caponigro, Ivano & Kathryn Davidson. 2011. Ask, and tell as well: Question-answer clauses in American Sign Language. Natural Language Semantics 19. 323–371.
Carlson, Gregory & F. Jeffrey Pelletier (eds.). 1995. The generic book. Chicago: Chicago University Press.
Chierchia, Gennaro. 1995. Dynamics of meaning: Anaphora, presupposition, and the theory of grammar. Chicago: Chicago University Press.
Chomsky, Noam. 2001. Derivation by phase. In Michael Kenstowicz (ed.), Ken Hale: A life in language, 1–52. Cambridge, MA: MIT Press.
Churng, Sarah. 2011. Syntax and prosodic consequences in ASL: Evidence from multiple WH-questions. Sign Language & Linguistics 14(1). 9–48.
Cinque, Guglielmo. 1999. Adverbs and functional heads. A cross-linguistic perspective. Oxford: Oxford University Press.
Corina, David, Ursula Bellugi, & Judith Reilly. 1999. Neuropsychological studies of linguistic and affective facial expressions in deaf signers. Language and Speech 42. 307–331.
Coulter, Geoffrey. 1978. Raised eyebrows and wrinkled noses: The grammatical function of facial expression in relative clauses and related constructions. In Frank Caccamise & Doin Hicks (eds.), ASL in a bilingual, bicultural context. Proceedings of the Second National Symposium on Sign Language Research and Teaching, 65–74. Coronado, CA: NAD.
Crasborn, Onno. 2011. Are left and right like high and low? Talk presented at the University of Göttingen, October 15, 2011.
Dachkovsky, Svetlana & Wendy Sandler. 2009. Visual intonation in the prosody of a sign language. Language and Speech 52(2/3). 287–314.
Davidson, Kathryn, Ivan Caponigro, & Rachel Mayberry. 2008. The semantics and pragmatics of clausal question-answer pairs in ASL. In Tova Friedman & Satoshi Ito (eds.), SALT XVIII, 212–229. Ithaca, NY: Cornell University.
Ekman, Paul, Wallace V. Friesen, & Joseph C. Hager. 2002. Facial Action Coding System. The manual. Salt Lake City, UT: Research Nexus Division of Network Information Research Corporation.
Elliott, Eeva. 2013. Phonological functions of facial movements: Evidence from deaf users of German Sign Language. Berlin: Freie Universität Berlin PhD dissertation.
Engberg-Pedersen, Elisabeth. 1990. Pragmatics of nonmanual behavior in Danish Sign Language. In William Edmondson & Fred Karlsson (eds.), SLR’87: Papers from the Fourth International Symposium on Sign Language Research, 121–128. Hamburg: Signum-Press.
Giustolisi, Beatrice, Emiliano Mereghetti, & Carlo Cecchetto. 2017. Phonological blending or code mixing? Why mouthing is not a core component of sign language grammar. Natural Language and Linguistic Theory 35. 347–​365. Gökgöz, Kadir & Ronnie B. Wilbur. 2017. Positive bias in negative yes/​no questions:  Evidence for Neg-​to-​C in TİD. In Paweł Rutkowski (ed.). Different faces of sign language research. Warsaw: University of Warsaw, Faculty of Polish Studies. Göksel, Aslı & Meltem Kelepir. 2013. The phonological and semantic bifurcation of the functions of an articulator: HEAD in questions in Turkish Sign Language. Sign Language & Linguistics 16(1).  1–​30.

Ronnie B. Wilbur Goldman-​Eisler, Frieda. 1968. Psycholinguistics:  Experiments in spontaneous speech. New  York: Academic Press. Grosjean, François. 1977. The perception of rate in spoken and sign languages. Perception and Psychophysics 22. 408–​413. Grosjean, François. 1978. Crosslinguistic research in the perception and production of English and American Sign Language. Paper presented at the National Symposium on Sign Language Research and Teaching, Coronado, CA. Grosjean, François. 1979. A study of timing in a manual and a spoken language: American Sign Language and English. Journal of Psycholinguistic Research 8. 379–​405. Grosjean, François & Maryann Collins. 1979. Breathing, pausing, and reading. Phonetica 36. 98–​114. Grosjean, François & Alain Deschamps. 1975. Analyse contrastive des variables temporelles de l’anglais et de français:  Vitesse de parole et variables composantes, phenomenes d’hesitation [Contrastive analysis of temporal variables in English and French:  Speech rate and related variables, hesitation phenomena]. Phonetica 31. 144–​184. Grosjean, François & Harlan Lane. 1977. Pauses and syntax in American Sign Language. Cognition 5. 101–​117. Hauser, Charlotte 2018. Question-​answer pairs:  The help of LSF. FEAST 2. 44–​55. Available at: www.raco.cat/​index.php/​FEAST. Henner, Jon, Sarah Fish, & Robert Hoffmeister. 2016. Non-​manual correlates for negation trump those for conditionals in American Sign Language (ASL). Poster presented at Theoretical Issues in Sign Language Research 12, Melbourne. Herrmann, Annika. 2013. Modal and focus particles in sign languages. A  cross-​linguistic study. Berlin: De Gruyter Mouton. Ichida, Yasuhiro. 2004. Head movement and head position in Japanese Sign Language. Poster presented at Theoretical Issues in Sign Language Research (TISLR8), 30 September–​2 October, 2004, Barcelona. Ichida, Yasuhiro. 2010. Introduction to Japanese Sign Language: Iconicity in language. Studies in Language Sciences 9. 3–​32. Ito, Junko & Armin Mester. 2009. Trimming the prosodic hierarchy. In Toni Borowsky, Shigeto Kawahara, Takahito Shinya, & Mariko Sugahara (eds.), Prosody matters: Essays in honor of Elisabeth Selkirk. London: Equinox Publishers. Karabüklü, Serpil, Fabian Bross, Ronnie B. Wilbur, & Daniel Hole. 2018. Modal signs and scope relations in TİD. FEAST 2. 82–​92. Available at: www.raco.cat/​index.php/​FEAST Katz, Jonah & Elisabeth Selkirk. 2011. Contrastive focus vs. discourse-​new: Evidence from phonetic prominence in English. Language 87(4). 771–​816. Kegl, Judy A. & Howard Poizner. 1997. Crosslinguistic/​crossmodal syntactic consequence of left hemisphere damage: Evidence from an aphasic signer and his identical twin. Aphasiology 11.  1–​37. Kegl, Judy A. & Ronnie B. Wilbur. 1976. When does structure stop and style begin? Syntax, morphology, and phonology vs. stylistic variation in American Sign Language. Chicago Linguistic Society 12. 376–​396. Kimmelman, Vadim & Lianne Vink. 2017. Question-​ answer pairs in Sign Language of the Netherlands. Sign Language Studies 17(4). 417–​449. Kratzer, Angelika & Elisabeth Selkirk. 2007. Phase theory and prosodic spellout: The case of verbs. The Linguistic Review 24. 93–​135. Krifka, Manfred, Francis J. Pelletier, Gregory N. Carlson, Alice Ter Meulen, Gennaro Chierchia, & Godehard Link. 1995. Genericity: An introduction. In Gregory N. Carlson & Francis J. Pelletier (eds.), The generic book, 1–​124. Chicago: University of Chicago Press. Kubus, Okan. 2010. 
Relative clause constructions in Turkish Sign Language (Türk İsaret Dili –​ TİD). Poster presented at Theoretical Issues in Sign Language Research (TISLR 10), Purdue University, West Lafayette, IN, 30 September–​2 October, 2010. Kubus, Okan. 2014. Relative clause constructions in Turkish Sign Language. Hamburg: University of Hamburg PhD dissertation. Lackner, Andrea. 2017. Functions of head and body movements in Austrian Sign Language. Berlin and Nijmegen: De Gruyter Mouton & Ishara Press.

Non-manual markers Lewis, David. 1975. Adverbs of quantification. In Edward Keenan (ed.), Formal semantics of natural languages, 3–​15. Chicago, IL: University of Chicago Press. Liddell, Scott K. 1977. An investigation into the syntax of American Sign Language. San Diego, CA: University of California PhD dissertation. Liddell, Scott K. 1978. Nonmanual signals and relative clauses in American Sign Language. In Patricia Siple (ed.), Understanding language through sign language research, 59–​90. New York: Academic Press. Liddell, Scott K. 1980. American Sign Language syntax. The Hague: Mouton. Liddell, Scott K. 1986. Head thrust in ASL conditional marking. Sign Language Studies 52. 244–​262. Lillo-​Martin, Diane. 1986. Two kinds of null arguments in ASL. Natural Language and Linguistic Theory 4. 415–​444. MacLaughlin, Dawn. 1997. The structure of determiner phrases: Evidence from American Sign Language. Boston, MA: Boston University PhD dissertation. Malaia, Evie, Joshua D. Borneman, & Ronnie B. Wilbur. 2016. Assessment of information content in visual signal: Analysis of optical flow fractal complexity. Visual Cognition 24(3). 246–​251. Malaia, Evie, Joshua D. Borneman, & Ronnie B. Wilbur. 2018. Information transfer capacity of articulators in American Sign Language. Language and Speech 61(1). 97–​112. Malaia, Evie & Ronnie B. Wilbur. 2012. Kinematic signatures of telic and atelic events in ASL predicates. Language and Speech 55(3). 407–​421. Malaia, Evie & Ronnie B. Wilbur. 2018. Information transfer capacity of articulators in American Sign Language. Language & Speech 61(1). 97–​112. Malaia, Evie & Ronnie B. Wilbur. 2019. Syllable as a unit of information transfer in linguistic communication: The Entropy Syllable Parsing model. WIREs Cognitive Science e1518. Malaia, Evie, Ronnie B. Wilbur, & Marina Milković. 2013. Kinematic parameters of signed verbs at the morpho-​phonology interface. Journal of Speech, Language and Hearing Research 56. 1677–​1688. Martell, Craig. 2005. FORM:  An experiment in the annotation of the kinematics of gesture. University of Pennsylvania PhD dissertation. McCullough, Stephen, Karen Emmorey, & Martin Sereno. 2005. Neural organization for recognition of grammatical and emotional facial expressions in deaf ASL signers and hearing nonsigners. Cognitive Brain Research 22(2). 193–​203. Neidle, Carol, Judy A. Kegl, Dawn MacLaughlin, Benjamin Bahan, & Robert G. Lee. 2000. The syntax of American Sign Language: Functional categories and hierarchical structure. Cambridge, MA: MIT Press. Nespor, Marina & Wendy Sandler. 1999. Prosody in Israeli Sign Language. Language & Speech 42. 143–​176. Nespor, Marina & Irene Vogel. 1986. Prosodic phonology. Dordrecht: Foris. Newport, Elissa & Richard Meier. 1985. The acquisition of American Sign Language. In Dan Slobin (ed.), The cross-​linguistic study of language acquisition, 881–​938. Hillsdale, NJ: Lawrence Erlbaum. Padden, Carol. 1983. Interaction of morphology and syntax in American Sign Language. San Diego, CA: University of California PhD dissertation. Partee, Barbara. 1991. Topic, focus and quantification. Semantics and Linguistic Theory 1. 257–​280. Pendzich, Nina. 2017. Lexical nonmanuals in German Sign Language (DGS): An empirical and theoretical investigation. Göttingen: Georg-​August-​Universität PhD dissertation. Pfau, Roland. 2005. Phrasal layers and prosodic spreading in sign languages. Talk presented at SIGNA VOLANT Workshop, Milan. Pfau, Roland. 2016. 
Non-​manuals and tones:  A comparative perspective on suprasegmentals. Revista de Estudos Linguísticos da Univerdade do Porto 11. 19–​58. Pfau, Roland & Josep Quer. 2002. V-​to-​Neg raising and negative concord in three sign languages. Rivista di Grammatica Generativa 27. 73–​86. Pfau, Roland & Josep Quer. 2010. Nonmanuals: Their grammatical and prosodic roles. In Diane Brentari (ed.), Sign languages, 381–​402. Cambridge: Cambridge University Press. Puupponen, Anna, Tuija Wainio, Birgitta Burger, & Tommi Jantunen. 2015. Head movements in Finnish Sign Language on the basis of motion capture data: A study of the form and function of nods, nodding, head thrusts, and head pulls. Sign Language & Linguistics 18(1). 41–​89.

Ronnie B. Wilbur Ramchand, Gillian & Charles Reiss (eds.). 2007. The Oxford handbook of linguistic interfaces. Oxford: Oxford University Press. Reilly, Judith, Marina McIntire, & Ursula Bellugi. 1990. The acquisition of conditionals in American Sign Language:  Grammaticized facial expressions. Applied Psycholinguistics 11(4). 369–​392 Sandler, Wendy. 1999a. Cliticization and prosodic words in a sign language. In Tracy Hall & Ursula Kleinhenz (eds.), Studies on the phonological word, 223–​255. Amsterdam: John Benjamins. Sandler, Wendy. 1999b. The medium and the message: Prosodic interpretation of linguistic content in sign language. Sign Language & Linguistics 2(2). 187–​216. Sandler, Wendy. 1999c. Prosody in Israeli Sign Language. Language and Speech 42(2–​3). 127–​142. Sandler, Wendy. 2005. Prosodic constituency and intonation in a sign language. In Helen Leuninger & Daniela Happ (eds.), Gebärdensprachen: Struktur, Erwerb, Verwendung (Linguistische Berichte Special Issue 15), 59–​86. Hamburg: Buske. Sandler, Wendy. 2010. Prosody and syntax in sign languages. Transactions of the Philological Society 108(3). 298–​328. Sandler, Wendy. 2012. Visual prosody. In Roland Pfau, Markus Steinbach, & Bencie Woll (eds.), Sign language: An international handbook, 55–​76. Berlin: De Gruyter Mouton. Sandler, Wendy & Diane Lillo-​ Martin. 2006. Sign language and linguistic universals. Cambridge: Cambridge University Press. Scheer, Tobias. 2010. A guide to morphosyntax-​phonology interface theories. Berlin:  De Gruyter Mouton. Selkirk, Elisabeth. 2011. The syntax-​phonology interface. In John Goldsmith, Jason Riggle, & Alan Yu (eds.), The handbook of phonological theory, 435–​484. Oxford: Wiley-​Blackwell. Thompson, Robin, Karen Emmorey, & Robert Kluender. 2006. The relationship between eye gaze and verb agreement in American Sign Language: An eye-​tracking study. Natural Language & Linguistic Theory 24(2). 571–​604. Wagner, Michael. 2005. Prosody and recursion. Cambridge, MA: MIT PhD dissertation. Weast, Traci. 2008. Questions in American Sign Language: A quantitative analysis of raised and lowered eyebrows. Arlington, TX: University of Texas at Arlington PhD dissertation. Wilbur, Ronnie B. 1990. Why syllables? What the notion means for ASL research. In Susan D. Fischer & Patricia Siple (eds.), Theoretical issues in sign language research: Vol. 1. Linguistics, 81–​108. Chicago: University of Chicago Press. Wilbur, Ronnie B. 1991. Intonation and focus in American Sign Language. In Yongkyoon No & Mark Libucha (eds.), ESCOL ’90, 320–​331. Columbus, OH: OSU Press. Wilbur, Ronnie B. 1994a. Eyeblinks and ASL phrase structure. Sign Language Studies 84. 221–​240. Wilbur, Ronnie B. 1994b. Foregrounding structures in ASL. Journal of Pragmatics 22. 647–​672. Wilbur, Ronnie B. 1995. What the morphology of operators looks like:  A formal analysis of ASL brow-​raise. In Leslie Gabriele, Debora Hardison, & Robert Westmoreland (eds.), FLSM VI: Proceedings of the Sixth Annual Meeting of the Formal Linguistics Society of Mid-​America, 67–​78. Bloomington, IN: IULC Publications. Wilbur, Ronnie B. 1996. Evidence for the function and structure of wh-​clefts in ASL. In William Edmondson & Ronnie B. Wilbur (eds.), International review of sign linguistics, 209–​256. Hillsdale, NJ: Lawrence Erlbaum. Wilbur, Ronnie B. 1997. A prosodic/​pragmatic explanation for word order variation in ASL with typological implications. 
In Kee Dong Lee, Eve Sweetser, & Marjolijn Verspoor (eds.), Lexical and syntactic constructions and the construction of meaning, Vol. 1, 89–​104. Amsterdam: John Benjamins. Wilbur, Ronnie B. 2000. Phonological and prosodic layering of non-​manuals in American Sign Language. In Harlan Lane & Karen Emmorey (eds.), The signs of language revisited: Festschrift for Ursula Bellugi and Edward Klima, 213–​241. Hillsdale, NJ: Lawrence Erlbaum. Wilbur, Ronnie B. 2009. Effects of varying rate of signing on ASL manual signs and nonmanual markers. Language and Speech 52. 245–​285. Wilbur, Ronnie B. 2011. Nonmanuals, semantic operators, domain marking, and the solution to two outstanding puzzles in ASL. Sign Language & Linguistics 14. 148–​178. Wilbur, Ronnie B. 2017. Internally headed relative clauses in sign languages. Glossa 2(1): 25. 1–​34. Wilbur, Ronnie B. 2018. Production of signed utterances. In Eva M. Fernández & Helen M.I. Cairns (eds.), Handbook of psycholinguistics, 111–​135. Oxford: Wiley-​Blackwell.

Non-manual markers Wilbur, Ronnie B. & Aleix Martínez. 2002. Physical correlates of prosodic structure in American Sign Language. In Mary Andronis, Erin Debenport, Anne Pycha, & Keiko Yoshimura (eds.), Chicago Linguistics Society (CLS) 38. 693–​704. Wilbur, Ronnie B. & Lauren R. Nikolai. 2019. What does AU17 ‘flat chin’ mean? Paper presented at Automatic Recognition and Analysis of American Sign Language. University of Chicago, May 15, 2019. Wilbur, Ronnie B. & Cynthia Patschke. 1998. Body leans and marking contrast in ASL. Journal of Pragmatics 30. 275–​303. Wilbur, Ronnie B. & Cynthia Patschke. 1999. Syntactic correlates of brow raise in ASL. Sign Language & Linguistics 2. 3–​40. Wilbur, Ronnie B. & Lisa Petersen. 1998. Modality interactions of speech and signing in simultaneous communication. Journal of Speech, Language & Hearing Research 41. 200–​212.


25 GESTURE AND SIGN
Theoretical and experimental perspectives

Bencie Woll & David Vinson

25.1  Introduction

Our ability to communicate is found in both the auditory and visual modalities, in sign languages and the co-speech gestures which accompany spoken languages. Sign languages, the natural languages which are found in Deaf communities all over the world, create linguistic structure by means of systematic and conventionalized movements of the hands, face, and body (Klima & Bellugi 1979; Sutton-Spence & Woll 1999; Emmorey 2002; Brentari 2010). Although co-speech gestures are considered to be non-linguistic, they are produced and perceived in tight semantic and temporal integration with speech (McNeill 1992, 2005; Kendon 2004), and non-manual elements of co-speech gesture (in particular, facial actions and prosody – both visual and vocal) can act as proxies for linguistic structures (Okrent 2002). Thus, language must be considered as a multimodal phenomenon (Kendon 2014; Vigliocco et al. 2014).

The aim of this chapter is to describe manual and non-manual gestures from both theoretical and experimental perspectives (for theoretical approaches to grammatical non-manual markers, see Wilbur, Chapter 23). Additionally, we will explore issues related to the grammaticalization or, perhaps more accurately, the ‘linguisticization’ of gesture. Taking gestures in spoken languages as well as sign languages into account, we will outline what the expressive resources of language look like both in spoken and sign languages. We will explore their roles in communication, cognition, and language processing, in the context of commonalities and contrasts in the brain’s network for auditory and visual components of human communication.

Co-speech gestures and signs have usually been studied separately and belong to different traditions of literature, but there have been recent attempts to present a joint perspective that seeks to understand the role of the visual modality in language in general by studying both gesture and sign (e.g., Goldin-Meadow & Brentari 2015; Perniss et al. 2015a, 2015b). Studies comparing hearing speakers’ gestures with systems found in emerging sign languages and ‘homesign’ systems (e.g., Goldin-Meadow 2003; Senghas et al. 2004) have shown that as gestures move towards sign language, idiosyncratic gestures used with speech are replaced by conventionalized expressions, and linguistic properties increase (McNeill 1992). This process has traditionally been viewed as a form of
grammaticalization. Pfau and colleagues have explored the description of sign language grammaticalization in a number of pioneering studies (see Pfau & Steinbach 2006, 2011; van Loon et al. 2014). Here, we consider processes of lexicalization of gesture separately from grammaticalization, with the term ‘linguisticization’ covering both processes as well as direct changes from non-​linguistic to grammatical elements without any intervening lexical stage (cf. Path 2 in Wilcox (2004, 2007)), as, for example, in the case of gestural facial expressions or rising pitch representing uncertainty, linguisticizing into question prosody in signed and spoken languages, respectively. This permits lexicalization to be used specifically to refer to the process by which gestures become lexical signs, and grammaticalization in sign language to parallel spoken language, where lexical linguistic elements are restructured into grammatical elements, as for example the modal CAN in British Sign Language (BSL), which may be related to the lexical sign UNDERSTAND (Johnston & Schembri 2010). Despite the significant ways in which sign languages and gestures differ, there is now a clearer understanding that they share pragmatic, semantic, and cognitive functions as well as both exploiting the visual modality in relation to iconic and indexical properties. Such properties may be more complex to express within the auditory modality. The perspective adopted here will demonstrate how communication in the visual modality reflects modality-​specific as well as modality-​independent aspects of the human language capacity and the extent to which a common cognitive and neural architecture underpins linguistic and non-​linguistic communication across modalities.

25.2  The visual modality in spoken language Gestures are defined by Kendon as visible actions of the hand, body, and face that are intentionally used to communicate and are expressed together with the verbal utterance (Kendon 2004). They are universal in the sense that all speaking communities around the world produce gestures, even though they may differ in relation to the communicative and social value of gesturing and the frequency of its use (Kita 2009; Chu et al. 2014). Gesture is involved whenever individuals speak. Gesture use begins in the first year of life and is also attested in the congenitally blind (Iverson & Goldin-​Meadow 1998; Özçalışkan et al. 2016). In spite of the close links between gesture and language, most grammatical theories and linguistic descriptions do not include gesture. Recognition of the relationship between gesture and language has been even more recent (Kendon 1986; McNeill 1992) than the recognition that sign languages are natural languages.

25.2.1  Forms and functions of gestures in language

While some gestures, such as representational gestures, abstract points, and beats, occur as accompaniments to speech, other gestures, such as emblems or interactional gestures, can replace or complement speech in an utterance or can be used without speech. Gestures allow speakers to create ‘composite signals’ (Clark 1996) by making use of the specific representational capacities of each modality (visual and auditory). Different forms of gestures fulfill different semantic and communicative functions when used with speech (see Özyürek 2012). For example, in so-called ‘emblems’ there is
an arbitrary relationship between their form and the meaning they convey, and they serve very similar functions to lexicalized words. ‘Representational’ gestures (also referred to as ‘iconic’ gestures) bear a more visually motivated relation between their form and the referent, action, or event they represent. For example, a stirring hand movement accompanying a spoken utterance about cooking bears a resemblance in form to the actual act of stirring. The link may also be metaphorical (see the discussion of time metaphors in Section 25.2.2.1 below). Even though representational gestures are visually motivated, the interpretation of their meaning relies heavily on the speech they accompany. Experimental studies have shown that in the absence of speech, the meaning of these gestures is highly ambiguous and not at all transparent from their form (Krauss et  al. 2000). When these gestures occur, they almost always overlap with semantically relevant speech –​which supports the disambiguation of their meaning: speech and such gestures form a co-​expressive ensemble. Representational gestures vary in terms of their semiotic characteristics –​that is, in the way in which they represent objects, actions, or events –​revealing modality-​specific means of conveying or depicting information such as the different visual perspectives of speakers to events, size, three-​dimensional characteristics, shapes, relative spatial relations among objects, etc. (McNeill 1992; Müller 2009; Tversky 2011; Debrelioska et al. 2013). Their meaning derives from the holistic representation of the image they represent, rather than from a combinatorial representation of individual meaning units such as those we see in spoken languages. As such they mostly fulfill the depictive aspects of communication –​representing and re-​enacting objects and events in a visible way in the shared space between the speaker and the interlocutor. ‘Points’ accompany demonstrative forms and pronouns in discourse, specifying referents, places, and locations (for a review, see Peeters & Ӧzyürek (2016)). Meaning is primarily conveyed by the direction of the point, linking the referent to the object/​ space, and fulfilling indexical aspects of reference. Such pointing gestures can either be orientated towards objects in the environment (objects or locations in the here-​and-​now of the discourse), or point to meaningful abstract locations in the space in front of the speaker. Pointing to abstract locations allows speakers to express relationships among the referents in their discourse (e.g., McNeill et al. 1993; Perniss & Ӧzyürek 2015). ‘Beats’ (meaningless repetitive hand movements) can be used to emphasize parts of speech at the information structure level to express focus. ‘Interactional’ gestures (e.g., a gesture for ‘I don’t know’) are used to regulate different aspects of dialogic interaction (e.g., expressing stance, turn taking) between the conversational partners (Bavelas et al. 1992). Non-​manual gestures are primarily articulated by the face and head, but other parts of the body can also be involved. For example, pointing gestures can be articulated by the lips and feet (Kita 2003). Head movements and face actions (including eyes, brows, and mouth) serve interactional functions, for example, affirmation; and face and mouth actions can also serve representational functions (for example, size). 
When all types of gestures are considered, it is clear that they have very similar functions to lexical, semantic, pragmatic, discourse, and interactional features of spoken languages –​but allow speakers to also convey aspects of messages (e.g., iconic, indexical) that cannot be conveyed through the linguistic structures of spoken language.

25.2.2  Role of gesture in language processing

25.2.2.1  Production

The production of co-speech gestures is closely linked to the production of the linguistic message conveyed in speech. This is evident especially when we consider the close timing between gestures and speech during production. A co-speech gesture is produced along with the relevant part of speech, and together they express a communicative act. Although not all spoken words are accompanied by gestures, when co-speech gestures are produced, they are almost always temporally aligned with a spoken utterance. Gesture and speech have been argued to share an underlying conceptual message (McNeill 1992; Bernardis & Gentilucci 2006), although the contributions of these communicative channels may be supplementary to, or redundant with, one another; and the representational formats of speech and gesture differ (McNeill 1992; de Ruiter et al. 2012), with speech being categorical and discrete, and gesture continuous and analogue.1

Experimental evidence that speech and gesture are tightly linked comes from studies demonstrating that the timing and co-expressive meaning alignment between speech and gesture vary systematically between typologically different languages (Kita & Özyürek 2003; Defina 2016; Floyd 2016; Gu et al. 2017). One demonstration of this is how people speak and gesture about motion events across languages. Languages vary in how they linguistically encode the path and manner of motion events (Talmy 1985), and the gestures that speakers of a given language use have been shown to reflect this variation. Speakers of Japanese and Turkish (contra English) are unlikely to tightly package information about both path and manner within a linguistic clause (e.g., rather than ‘she runs down the stairs’ they will say ‘she runs and goes down the stairs’). Likewise, when gesturing about motion events, speakers of Japanese and Turkish (contra English) are unlikely to encode both path and manner within a single gesture, preferring either to represent only one element or to split the two elements into separate expressive gestures (Kita & Özyürek 2003; Özyürek et al. 2005). In English, however, since it is grammatically possible to express manner and path within one linguistic clause (‘she ran down the stairs’), speakers are more likely to express both components in a single gesture. That is, gestures in each language appear to be shaped by the syntactic and semantic packaging of information at the clause level. Recently, Defina (2016) has shown that speakers of Avatime (an indigenous language of Ghana), which has serial verb constructions, also package two or more semantic elements into a single gesture.

A second area where gesture may vary in relation to linguistic structure is in the expression of spatial frames of reference. Speakers of languages that preferentially express spatial relationships using cardinal directions (e.g., east, west, north, and south) rather than egocentric ones (e.g., left, right) also tend to express cardinal relationships in their gestures (see, for instance, Haviland (1993) for Guugu Yimidhirr, a language of Northern Australia). Gu et al. (2017) have also shown that time metaphors used by speakers of Chinese are based along the vertical dimension, and that this is reflected in their gestures, unlike English speakers’ speech and gestures, in which time is reflected along a horizontal axis.

Finally, in some languages, gestures may consistently express semantic information not expressed in speech.
Floyd (2016) has shown that speakers of the Brazilian indigenous language Nheengatú use pointing gestures ‘adverbially’ to indicate time. While speech in
Nheengatú gives information about time in general terms (for example, morning, noon, evening), celestial pointing gestures indicate more specific times of the day (e.g., 10 in the morning). Speakers’ judgments about these composite utterances have also shown that they rely on gestures for further specification of time. Floyd argues that there is no a priori reason for linguistic properties not to develop in the visual practices (i.e., gestures) that accompany spoken language. The studies presented above show that gestures are informed by the lexical, syntactic, and pragmatic possibilities of different languages. This poses interesting questions about how speakers coordinate speech and gesture during production to achieve the semantic and temporal alignment they co-​express. This brings us to those language production models which take into account the close links between speech and gesture. Two different types of speech and gesture production model have been proposed: the first views gestures as an independent but parallel expressive channel (de Ruiter 2000; Krauss et al. 2000); in the second type of model, speech and gesture interact during the formulation of the linguistic message at different levels (McNeill 1992; Kita & Ӧzyürek 2003). These models also differ in terms of whether they consider gesture as part of the communicatively intended message (de Ruiter 2000; Kita & Ӧzyürek 2003) or as independently produced (Krauss et al. 2000). According to Krauss and colleagues, gestures are generated from images in working memory, which might help to prime lexical items cross-​modally. They are not necessarily assumed to be communicative. In de Ruiter’s (2000) model, gestures are generated from conceptualizations intended to be part of the communicative message, but during production, the information to be conveyed in both modalities is split and expressed through independent channels. In interactionist models, for example that of McNeill (1992, 2005), gesture and speech are derived from an initial single unit, which McNeill refers to as a ‘Growth Point’, composed of both imagistic and linguistic representations, with both gesture and speech being manifestations of this combined unit of representation. In another interactionist view –​the Interface Hypothesis –​proposed by Kita & Ӧzyürek (2003) and modified by Chu & Kita (2016), representational gestures and speech are characterized as originating from different representations: gestures from imagistic/​ action representations and language from propositional representations. During the language production process, however, both representations interact at the level of linguistic formulation. Figure  25.1 compares different models proposed for speech and gesture production.

Figure 25.1  Adapted schematic overview of different models in relation to speech and gesture production (from left to right: Morrel-​Samuels & Krauss (1992); de Ruiter (2000), Kita & Ӧzyürek (2003) (taken from Wagner et al. (2014) with permission))

25.2.2.2  Comprehension As in the case of speech production, there is growing evidence that gestures are integrated with speech comprehension. It has been long noted that conversational addressees obtain information from gestures that accompany speech. Even though most models of speech and gesture have focused on production, recent research has also provided ample evidence that addressees integrate the information coming from both modalities during comprehension. For example, Kelly et al. (1999) showed participants video stimuli where gestures conveyed information additional to what was conveyed in speech (e.g., producing a gesture pantomiming drinking, while the speech channel is ‘I stayed up all night’) and asked them to write down what they heard. In addition to the speech they heard, participants included information that was conveyed only in gesture and not in speech (i.e., ‘I stayed up drinking all night’). In another study, Beattie & Shovelton (1999) showed that questions about the size and relative position of objects in a speaker’s message were answered more accurately when gestures were part of the description and conveyed information additional to speech than when they were perceived only in the auditory modality. In a priming study by Kelly et al. (2010), participants were presented with action primes (e.g., someone chopping vegetables) followed by targets comprising speech accompanied by gesture. They were asked to press a button if what they heard in speech or saw in gesture depicted the action prime. Participants related primes to targets more quickly and accurately when they contained congruent information (speech:  ‘chop’; gesture:  chop) than when they contained incongruent information (speech:  ‘chop’; gesture:  twist). Moreover, the degree of incongruence between overlapping speech and gesture affected processing, with fewer errors for weak incongruities (speech: ‘chop’; gesture: cut) than for strong incongruities (speech: ‘chop’; gesture: open). This indicates that in comprehension, the semantic relations between the two modalities are taken into account, providing evidence against independent processing of the two channels. Furthermore, and crucially, this effect was bidirectional and was found to be similar when either speech or gesture targets matched or mismatched the action primes. That is, gesture influences processing of speech and speech influences processing of gesture. Further research has shown that gestures also show semantic priming effects. For example, Yap et al. (2011) have shown that iconic gestures presented without accompanying speech (highly conventionalized gestures such as flapping both hands at the side of the body to mean ‘bird’) prime the sequentially presented spoken target words. Finally, evidence for semantic integration between representational gestures and speech is also found in many neurocognitive studies. Several studies have shown that comprehension of iconic gestures activates brain processes known to be involved in semantic processing of speech. First of all, gestures modulate the electrophysiological component N400 (e.g., Ӧzyürek et al. 2007), which has previously been found to be sensitive to the ease of semantic comprehension of words in relation to a previous context. 
Second, viewing iconic gestures in the context of speech (matched or mismatched) recruits the left-lateralized frontal-posterior temporal network (left inferior frontal gyrus (IFG), middle temporal gyrus (MTG), and superior temporal gyrus/sulcus (STG/S)) known to be involved in semantic integration of words in sentences (see Hagoort & van Berkum (2007), Emmorey & Özyürek (2014), and Özyürek (2014) for a broader overview). Recent studies by Peeters and colleagues (2015, 2016, 2017) have also shown that match/mismatch between an element expressed in speech (e.g., ‘apple’) and pointing to a referent (e.g., an apple) evokes semantic integration, as indexed by modulation of N400,
and also recruits IFG, MTG, and STG/​S –​as also found for iconic gestures in the above-​ mentioned studies.
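As an aside, the behavioral congruency logic described earlier in this section can be made concrete with a toy computation. The numbers below are invented and do not come from Kelly et al. (2010) or any other study; the sketch simply shows how mean response times and error rates per speech-gesture congruency condition, the two measures on which such effects are reported, would be tabulated from trial-level data.

```python
from statistics import mean

# Toy trial records: (congruency condition, response time in ms, response correct?).
trials = [
    ("congruent", 620, True), ("congruent", 640, True),
    ("weak-incongruent", 680, True), ("weak-incongruent", 700, False),
    ("strong-incongruent", 740, False), ("strong-incongruent", 760, True),
]

# Group trials by condition.
by_condition = {}
for condition, rt, correct in trials:
    by_condition.setdefault(condition, []).append((rt, correct))

# Report the condition means that a congruency analysis would compare.
for condition, observations in by_condition.items():
    rts = [rt for rt, _ in observations]
    errors = [not correct for _, correct in observations]
    print(f"{condition:18s} mean RT = {mean(rts):.0f} ms, error rate = {mean(errors):.0%}")
```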

25.2.3  Conclusions: gesture The above findings show that gestures are an integral part of language at the level of semantics, syntax, pragmatics, and discourse. Gestures, because of the affordances of the visual modality, can subserve ‘indicating’ and ‘depicting’ aspects of communication –​albeit in a different representational format from that found in speech. In visible ways, they allow the grounding of concepts conveyed to the visual here-​and-​now by the speech component, either by overtly linking speech to objects through pointing, or by re-​enacting them in a virtual space created among the conversational participants, to convey analogue representations of events. In doing so, they play a role in the conceptualization, formulation, and comprehension of utterances. They also recruit language networks in the brain during processing. Thus, although their visual and semiotic properties differ from the linguistic units of spoken language, gestures are an integral part of our language capacity. They are integrated into language-​specific semantic and structural aspects of spoken language and interact with spoken language during the production and processing of utterances. Now we turn to how the visual modality is recruited in sign languages:  languages created by Deaf communities, who communicate entirely in the visual modality, and where visible bodily articulators alone express all functions of language and communication.

25.3  Sign language and language modality

Following the groundbreaking work by linguists and cognitive scientists over the last 50 years, it is now recognized that sign languages of Deaf communities, such as American Sign Language (ASL) and BSL, are not idiosyncratic compilations of silent gestures/pantomimes but are structured and processed in a similar manner to spoken languages (Emmorey 2002; Sandler & Lillo-Martin 2006; MacSweeney et al. 2008). The striking difference is that they operate in a wholly non-auditory, visual-spatial modality. In this section, we first summarize which aspects of sign language structure and processing are similar to those found in spoken language regardless of the differences in modality. Second, we illustrate which aspects of sign languages reveal modality-specific features and how these compare not only to speech but also to the gestural properties that we observe in spoken languages, as discussed above. Sign languages thus offer unique insights into the modality-independent versus modality-specific (e.g., iconic, embodied) aspects of our language capacity and its cognitive and neural architecture.

Before we begin, it is important to say a few words about the social and linguistic contexts in which sign languages are most likely to emerge. Just as all languages need a community of users, sign languages need a Deaf community, which can only come into existence where deaf people are in contact with one another. Although there are descriptions of deaf people’s signing going back hundreds of years, the establishment of schools for deaf children, starting in the late eighteenth century in Europe, triggered the creation of Deaf communities and sign languages as we know them today. At these schools, communication between children and teachers resulted in the conventionalization of the gestures used by isolated deaf individuals (Goldin-Meadow 2003; Coppola & Newport
2005). In many countries, education for deaf children has only recently begun, and this can provide an environment in which new sign languages can emerge. Kegl et al. (1999) have described how the establishment of the first school for deaf children in Nicaragua in the 1980s led to the emergence of a national sign language. There are also village communities where an unusually high incidence of deafness results in a sign language used by both deaf and hearing people, even in the absence of schools (Sandler et al. (2005); see Woll & Ladd (2010) and Nyst (2012) for a review of a number of these). Where deaf children are not exposed to a sign language, they invariably develop systematic gestural communication within their families. Unlike sign language, ‘homesign’ systems do not have a full linguistic structure. They may have rudimentary features of linguistic structure such as a lexicon, simple morphology, segmentation, and consistent word order. However, there is evidence that homesigners cannot master a sign language fully if they are exposed to one only after childhood (i.e., later than 6–​7 years). In other words, homesign does not serve as a ‘first’ language when homesigners are later exposed to a conventionalized sign language (Mayberry 2010). Although the emergence and sociolinguistic context of sign languages differs from spoken languages, which are  –​with rare exceptions  –​transmitted by native speakers and in which the only ‘new’ languages are creoles derived from contact between two different spoken languages, the sign languages used by Deaf communities display many of the linguistic structures we see in spoken languages. It is clear that when deaf individuals are able to communicate with each other, in a very short amount of time, the communication systems they use show substantial divergence from the gestures used by speaking people. For example, Senghas et al. (2004) have investigated co-​speech gestures used by Spanish speakers in expressing simultaneously occurring manner and path (for example, ‘run-​downhill’) and compared them to signs produced by three cohorts of Nicaraguan signers (Cohort 1: deaf child and adolescent homesigners; Cohort 2: deaf homesigners brought together as children and exposed to each other and to the communication of Cohort 1; Cohort 3: deaf children exposed to Cohorts 1 and 2). While gestures produced by Spanish speakers expressed manner and path elements in one gesture, signers of Cohorts 2 and 3 segmented them –​akin to the separate verbs for manner and path seen in some spoken languages, as discussed in Section 25.2.2 –​and also combined them sequentially to express simultaneity of manner and path. A recent study by Ӧzyürek et al. (2014) has observed similar developmental changes in Turkish deaf children’s homesign compared to their hearing Turkish caregivers’ gestures and the gestures of hearing Turkish-​speaking children. Even though Turkish homesigners occasionally segmented elements of manner and path into separate units, they were more likely to produce gestures that expressed both elements in one unit, and their patterns thus resemble those of the first Cohort of Nicaraguan signers rather than the 2nd and 3rd Cohort signers (Ӧzyürek et al. 2014). These studies indicate that in the context of deafness, visual communication goes beyond the expressive possibilities of the gestures used by speakers. 
However, as we will discuss below, some similarities between the two systems (e.g., the role of iconicity) exist because of the shared affordances of modality and articulators between gesture and sign and because sign languages may also make use of gestures, as spoken languages do. When comparing sign language with gesture and spoken language, we can see both modality-​ independent and modality-​ specific features at various levels, phonology
(Section 25.3.1.1), morphology and syntax (Section 25.3.1.2), and in particular, in relation to the exploitation of space for grammatical purposes (Section 25.3.2). Although structuring at different linguistic levels is similar across signed and spoken languages, the ways in which these structures can be expressed show modality-​specific patterning. There are also areas of sign language structure for which there is debate about whether they should be analyzed solely in terms of abstract, categorical, grammatical structures, or whether their analysis needs to take into account gestural form-​meaning mappings and/​ or iconic correspondences.

25.3.1  Modality-independent and modality-dependent aspects of sign languages

The sign languages used by Deaf communities are complex natural human languages and are not derived from the spoken languages of the surrounding hearing communities – even though, due to contact, some features from spoken languages can influence sign language structures. Meier (2002) lists a number of the non-effects of modality (i.e., the shared properties of spoken and signed languages), which are given in (1).

(1) a. Conventional vocabularies: learned pairing of form and meaning;
    b. Duality of patterning: meaningful units built of meaningless sublexical units, whether orally or manually produced;
    c. Productivity: new vocabulary may be added to signed and spoken languages;
    d. Syntactic structure:
       i. Same word classes in spoken and signed languages: nouns, verbs, and adjectives;
       ii. Embedding of clauses and recursion;
       iii. Trade-offs between word order and morphological complexity in how grammatical relations are marked;
    e. Acquisition: similar timetables for acquisition of signed and spoken language.

Below we compare some of the features of sign languages to those found in spoken languages, including where these are expressed differently due to different affordances of the visual modality. In many cases, especially where visual motivation is evident (i.e., iconicity), these features may also resemble those found in co-speech gesturing.

25.3.1.1  Phonology

Since Stokoe (1960), linguists have seen the phonological structure of signs as consisting of simultaneous combinations of configuration(s) of the hand(s), a location where the sign is articulated, and movement – either a path through signing space or an internal movement of the joints in the hand. Each is understood to be a part of the phonology, because changing one of these parameters can create a minimal pair (see Figure 25.2). There have been considerable modifications to Stokoe’s framework since 1960, but this model has remained the basic description of sign language phonology (see van der Kooij & van der Hulst (Chapter 1) for discussion).
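To make this parameter decomposition concrete, the following sketch treats each sign as a bundle of handshape, location, and movement values and flags pairs of entries that differ in exactly one parameter, the configuration that defines a minimal pair. The glosses and parameter values are invented placeholders rather than descriptions of actual BSL or ASL signs, and real phonological models distinguish considerably more detail (e.g., orientation and two-handed specifications).

```python
from dataclasses import dataclass
from itertools import combinations

@dataclass(frozen=True)
class Sign:
    """A toy phonological entry: one value per major parameter."""
    gloss: str
    handshape: str
    location: str
    movement: str

PARAMETERS = ("handshape", "location", "movement")

def differing_parameters(a, b):
    """Return the names of the parameters on which two signs differ."""
    return [p for p in PARAMETERS if getattr(a, p) != getattr(b, p)]

# Invented entries purely for illustration.
lexicon = [
    Sign("SIGN-A", handshape="B", location="chin", movement="tap"),
    Sign("SIGN-B", handshape="B", location="forehead", movement="tap"),
    Sign("SIGN-C", handshape="5", location="chin", movement="tap"),
]

# A pair of signs differing in exactly one parameter is a candidate minimal pair.
for a, b in combinations(lexicon, 2):
    diff = differing_parameters(a, b)
    if len(diff) == 1:
        print(f"{a.gloss} / {b.gloss}: minimal pair, contrasting in {diff[0]}")
```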

Figure 25.2  Minimal pairs in BSL (British Sign Language)

25.3.1.2  Morphology and syntax Sign language morphology tends to manifest itself in simultaneous combinations of meaningful handshapes, locations, and movements, rather than in sequential affixation. For example, handshape can change to reflect numbers. In BSL,2 n-​WEEKS, n-​O’CLOCK, and n-​YEARS-​OLD are articulated with conventionalized location and movement, while the handshapes (e.g., of ‘3’ or ‘5’) incorporated into such time signs indicate the number (e.g., 3-​WEEKS, 5-​YEARS-​OLD). Signs referring to objects and actions may also differ only in movement:  the verbs LOCK, READ-​NEWSPAPER, and EAT, for instance, are made with long movements, compared to the derivationally related nouns KEY, NEWSPAPER, and FOOD, which have short, repeated movements, as is illustrated for the pair KEY /​LOCKV in Figure 25.3.

Figure 25.3  Movement contrast between the derivationally related BSL signs KEY and LOCKV
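The simultaneous character of this morphology can be sketched in code. The fragment below is only a schematic illustration, not an attested BSL paradigm: the feature values are placeholders. It treats numeral incorporation as substituting a numeral handshape into an otherwise fixed time-sign base, and noun derivation as a change in the movement value alone.

```python
# Numeral incorporation: the base contributes location and movement,
# the numeral contributes only the handshape. Values are placeholders.
TIME_BASES = {
    "WEEKS": {"location": "neutral space", "movement": "single path"},
    "YEARS-OLD": {"location": "chin", "movement": "single path"},
}
NUMERAL_HANDSHAPES = {1: "1", 2: "2", 3: "3", 4: "4", 5: "5"}

def incorporate_numeral(n, base):
    """Build a form such as 3-WEEKS by swapping in the numeral handshape."""
    form = dict(TIME_BASES[base])
    form["handshape"] = NUMERAL_HANDSHAPES[n]
    form["gloss"] = f"{n}-{base}"
    return form

# Noun-verb pairs: the derivationally related noun keeps handshape and
# location but replaces the long movement with a short, repeated one.
def derive_noun(verb_form, noun_gloss):
    noun = dict(verb_form)
    noun["movement"] = "short repeated"
    noun["gloss"] = noun_gloss
    return noun

LOCK_V = {"gloss": "LOCK", "handshape": "A", "location": "neutral space",
          "movement": "long single"}  # placeholder feature values

print(incorporate_numeral(3, "WEEKS"))
print(derive_noun(LOCK_V, "KEY"))
```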

Other morphological features are also realized by changes in movement and location. Thus, degree is shown through size, speed, onset speed, and length of hold in a movement, with, for example, LUCKY having a smaller, faster, and smoother movement than VERY-​LUCKY. Movement changes conveying temporal aspect are frequently visually motivated, so that repeated actions or events are shown through reduplication of the sign; duration of an event is paralleled by duration of the sign (signs for shorter events being articulated for less time than signs for longer events), and when an event has a clear endpoint, the movement of the sign is also characterized by a clear endpoint (Wilbur 2008; Strickland et al. 2015). Signs can also change handshape to indicate how a referent is handled (Supalla 1986). For instance, in the BSL examples (2)  and (3), the verb HAND-​OVER involves the same pattern of movement –​with the hand moving to the location of a virtual or real recipient (indicated by subscript ‘3’) –​but the different handshapes reflect the shape of a hand as if holding a thin object (flower stem) or a curved object (so-​called classifier (CL) handshapes, see Tang et al. (Chapter 7) for discussion). In contrast, the verb GIVE does not change handshape, regardless of what is being given. (2) 

FLOWER 1HAND-OVER-CL:hold-long-thin-object3
‘I give him/her a flower.’

(3) ICE-CREAM 1HAND-OVER-CL:hold-curved-object3
‘I give him/​her an ice cream (cone).’ Both spoken and signed languages articulate lexical items sequentially. However, in sign languages, the availability of two articulators enables the extensive use of simultaneously articulated structures (Vermeerbergen et al. 2007). The two hands can be used to represent the relative locations and movements of two referents in space and their topic-​comment relations. Thus, simultaneity is an aspect of sign languages that allows the expression of distinctions found in spoken languages but in a different, visual, format.3 Below we will see that this feature also allows the depiction of iconic structures similar to those found in gestures accompanying spoken languages. Sign languages exploit the use of space for grammatical purposes, preferring dimensionality (the analogue representation of size and shape) and simultaneity in syntax, while spoken languages prefer linearization and affixation. In earlier literature, on ASL in particular (e.g., Poizner et al. 1987; Padden 1988), two uses of space for linguistic purposes were contrasted. Topographic space was described as being used to depict spatial relationships and to map referents onto a representation of real space (spatialized grammar), while syntactic space was conceived of as an exploitation of space for purely grammatical purposes, without any mapping to real-​world spatial relationships (see Figure 25.4 for examples of sentences illustrating these different uses of space). These models were very closely related to the linguistic descriptions used for spoken languages, such as pronouns, agreement in person and number, etc. We will see below that recently, some of these analyses have been questioned, and many researchers now see them as more closely involving iconic gestural features.
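One way to visualize how these simultaneous pieces fit together is the sketch below. It is illustrative only: the locus labels, the shape-to-classifier mapping, and the feature names are invented for the example rather than taken from a published grammar. It assigns discourse referents to loci in signing space and composes a HAND-OVER-type predicate, as in (2) and (3) above, from a classifier handshape selected by the shape of the handled object, a path movement, and the giver's and recipient's loci.

```python
# Illustrative sketch: referent loci in signing space and a directing,
# classifier-type verb composed of simultaneous parameters.
SHAPE_TO_CLASSIFIER = {
    # invented mapping, cf. examples (2) and (3) above
    "long-thin": "CL:hold-long-thin-object",
    "curved": "CL:hold-curved-object",
}

def assign_loci(referents):
    """Give each non-first-person referent its own locus label (3a, 3b, ...)."""
    loci = {"signer": "1"}
    for i, referent in enumerate(referents):
        loci[referent] = "3" + chr(ord("a") + i)
    return loci

def hand_over(theme_shape, source, goal, loci):
    """Compose handshape, path movement, and loci into one simultaneous form."""
    return {
        "handshape": SHAPE_TO_CLASSIFIER[theme_shape],
        "movement": "path",
        "from_locus": loci[source],
        "to_locus": loci[goal],
    }

loci = assign_loci(["friend"])
print(hand_over("long-thin", "signer", "friend", loci))
# {'handshape': 'CL:hold-long-thin-object', 'movement': 'path',
#  'from_locus': '1', 'to_locus': '3a'}
```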

a. FILM INDEX LIKE-NOT FRIEND
‘(My) friend didn’t like that film.’

b. TABLE BOOK PEN CL:long-thin CL:flat
‘The pen is next to the book on the table.’

Figure 25.4  (a) Example of ‘syntactic space’: the referent ‘film’ is located in the upper right of signing space by means of an index, but this does not map onto any real-​world location; (b) Example of ‘topographic space’ (spatialized grammar): in the predicate, the referents ‘book’ and ‘pen’ are replaced with classifiers (CL) for ‘flat object’ and ‘long thin object’, respectively, and these handshapes are located adjacent to each other and at the height in signing space of the sign ‘table’

25.3.2  Iconic and gestural elements in sign language

While significant progress has been made by treating sign languages as having many features in common with those described for spoken languages, the ways in which sign languages differ from spoken languages may have implications for how they are processed and understood by language users.

25.3.2.1  Iconicity Perhaps the most obvious of these differences is the way in which sign languages exploit the visual modality through iconicity. Iconicity refers to the resemblance between an object or action and the word or sign used to represent that object or action. Early studies indicated that iconicity might not play a major role in sign language structure or processing. Klima & Bellugi (1979) provide a detailed discussion of what is meant by a sign being ‘iconic’, pointing out that (i)  many signs in ASL (and other sign languages) are non-​iconic; (ii) iconic signs vary from one sign language to another, since different visual motivations for a sign form may be selected (e.g., OLD represented by wrinkles in BSL and by a beard in ASL); and (iii) iconic signs are conventionalized forms, subject to regular processes of phonological change (see Schlenker, Chapter 23, for recent theoretical perspectives on iconicity). There have been contrasting findings in relation to the role iconicity plays in sign language processing at the lexical level. Poizner et al. (1981) showed that highly iconic signs were not more easily remembered than signs that are
highly opaque; Atkinson et al. (2005) reported that signers with word-​finding difficulties following stroke found iconic signs no easier to retrieve than non-​iconic signs (also see Section 25.4.2 on atypical sign language); and Meier et al. (2008) suggested that iconicity is not a factor in early sign language acquisition by deaf children. More recent studies have suggested that iconicity does have a role in the structure of the lexicon and grammar of sign language as well as in processing and learning (Taub 2001; Perniss et al. 2010; Emmorey 2014; Strickland et al. 2015). For example, Thompson et al. (2012) report that iconic signs are learned earlier than non-​iconic signs. Thompson et  al. (2010) also report effects of iconicity on phonological decision tasks. Studies of ASL at the narrative and discourse level have suggested that in order to understand ASL, the addressee must process ‘surrogates’ (Liddell 2003) or ‘depictions’ (Dudis 2004) produced by the signer. Both Liddell and Dudis argue that the signer creates a visual scene and ‘paints a picture’ for the addressee (close to Clark’s (2016) notion of ‘depiction’ and ‘indication’ discussed above in relation to gesture), utilizing the visual medium and signing space to convey meanings in ways that are difficult, if not impossible, in spoken languages –​except when conveyed through gesture. Thus, these authors suggest that sign languages are produced and understood in ways that are very different from spoken languages. While this might suggest that iconic signs and gestural depiction are identical, they are not the same. More accurately, iconic depiction and indication can be seen not as sign language-​specific constructions but rather as modality-​specific forms, as revealed both in signing (ASL) and speaking, when we take into account the gestures used by speakers (Quinto-​Pozos & Parrill 2015).

25.3.2.2  Representation of motion events in sign and gesture Researchers working on a number of sign languages (e.g., Engberg-​Pedersen (1993) on Danish Sign Language; Liddell (1990) on ASL) argued that linguistic structures of sign language that use sign space to express space and syntactic features are not based on abstract linguistic properties but on inherent analogue locative relationships among people, objects, and places. Liddell’s later work (2003) went further, proposing that verbs such as HAND-​OVER are a mix of gestural and linguistic elements, and analyzing such signs as being composed of a linguistic part expressed by the handshape, and a gestural part linking the referent(s) to a locus. The debate about the status of ‘agreement’ in sign languages has continued, with an increasing number of sign language researchers describing the existence of mixed forms (i.e., structures involving both linguistic and non-​linguistic components), particularly in classifier constructions (see example (2) and Figure 25.4 above). While Slobin et al. (2003), Morgan & Woll (2007), and Pfau et al. (2018) have argued for a wholly linguistic perspective, Schembri and colleagues (2005, 2018) have followed the arguments of Engberg-​Pedersen and Liddell. Schembri et  al. (2005) compared the representation of motion events by hearing non-​signers using gesture without accompanying speech and by native signers of three different sign languages. They found that the descriptions of motion events produced by the hearing gesturers showed significant points of correspondence with the signed constructions. In both cases, the location and movement parameters were similar, and the handshape component showed the greatest differences. They argue therefore that these data are consistent with the claim that verbs of motion and location in sign languages are blends of gestural and
linguistic elements. However, more research is needed to understand the extent to which ‘gestural’ elements look similar across signed and spoken languages (Ӧzyürek 2012).

25.4  Sign language, gesture, and the brain The main focus in this section is on neuroscience studies of sign language processing. However, it should be noted that a substantial number of behavioral studies have also explored the cognitive mechanisms underlying sign language perception and production and revealed similarities between spoken and sign languages. These include studies of lexical segmentation (Orfanidou et al. 2010); lexical access –​exploring lexicality effects in tasks using signs and non-​signs (Corina & Emmorey 1993; Carreiras et al. 2008); and priming effects at all linguistic levels. Examples of the latter include studies of priming effects related to phonology (Dye et al. 2006), morphology (Emmorey 1991), syntax (Hall et al. 2015), and semantics (Bosworth & Emmorey 2010). There has been relatively little research on production but studies of tip-​of-​the-​finger (TOF) phenomena (Thompson et  al. 2005) and slips of the hand (Hohenberger et  al. 2002) reveal phonological and semantic effects in sign retrieval. For comprehensive reviews of sign language processing research, we refer to Emmorey (2002); for an overview of studies on sign language production, see Hohenberger & Leuninger (2012). The right hemisphere (RH) is known to be dominant for a number of visuospatial processing abilities, suggesting that there is an RH advantage for simultaneous processing (Hellige 1993). One might therefore infer that spoken language, being more linearized, is left-​lateralized, while sign language, which is perceived visually and uses space for grammatical purposes, might be either right-​lateralized or show more mixed lateralization.

25.4.1  Brain activation during language processing

Neville et al. (1998) suggested that the bilateral activation they observed for ASL could be related to ASL's use of space for linguistic purposes. However, evidence for the claim that similar brain structures are involved in signed and spoken languages in healthy subjects comes from a study by MacSweeney et al. (2002), which compared BSL sentences (including sentences with spatial grammar) presented to deaf and hearing native signers with audiovisual English presented to hearing monolingual speakers. The perception of BSL and audiovisual English sentences recruited very similar neural systems in native users of those languages. Both languages recruited the perisylvian cortex in the left hemisphere (LH). There was also RH recruitment by both English and BSL and no differences in the extent of recruitment of the RH (see Figure 25.5, columns 1 and 3). Presumably, this reflects the contribution of both hemispheres to comprehension of the sentences presented, whether the language is spoken or signed. Figure 25.5 also provides evidence that cerebral organization of language in Deaf signers utilizes a left-lateralized frontal/temporal network as in speakers. The differences between Neville et al. and MacSweeney et al. underline the need for further studies, using a variety of different sign languages and also a variety of signers with different language experiences and backgrounds. We will see below, however, that there is some evidence for the recruitment of the right hemisphere for

sign language during processing of spatial language (Emmorey et al. 2013) and from lesion studies.

Figure 25.5  Images of the brain depicting (group) fMRI activation. Regions activated by BSL perception in Deaf and hearing native signers (first and second columns, respectively) and by audiovisual speech perception in hearing non-​signers (third column). For language in both modalities, and across all three groups, activation is greater in the left than the right hemisphere, and perisylvian regions are engaged (reprinted from MacSweeney et al. (2002) with permission)

25.4.2  Atypical signers

Case studies of Deaf individuals with acquired brain damage have also provided evidence for LH dominance in processing sign languages as well as spoken languages. Deaf signers, like hearing speakers, exhibit language disturbances when left-hemisphere cortical regions are damaged (e.g., Poizner et al. 1987; Hickok et al. 2002; Marshall et al. 2004; for review, see MacSweeney et al. 2008). Right hemisphere damage, although it can disrupt visual-spatial abilities (including some involved in sign language processing of spatial language), does not lead to sign aphasia (Atkinson et al. 2005). At first glance, the robust nature of the left-lateralized sign production system does not seem to be modulated by the presence of iconic forms in sign languages. Sign-aphasic patients are often unable to produce iconic signs in response to prompts such as 'Show me the sign for "toothbrush"', but can produce the same actions elicited as pantomimed gesture – 'How do you brush your teeth?' (see Corina et al. 1992; Marshall et al. 2004). That is, they show a dissociation between sign language (impaired) and gesture (unimpaired). Imaging studies, too, suggest that iconicity fails to influence the cortical regions activated in the production of sign language (see Emmorey et al. (2004) and San Jose-Robertson et al. (2004) for further discussion of the role of iconicity in sign language production).

Another domain where we see modality-​dependent differences between signed and spoken languages is in the use of spatialized structures to express locative relationships. As mentioned in Section 25.3.2.2 above, Liddell (2003) argued that such constructions are partially gestural in nature. Several studies have found differential disruptions in the use and comprehension of sign language sentences that involve spatialized grammar compared to other grammatical constructions. For example, Atkinson and colleagues (2005), for their group study of left and right hemisphere-​damaged signers of BSL, devised tests that included comprehension of single sign and single predicate-​verb constructions (e.g., THROW-​DART), simple and complex sentences that varied in argument structure and semantic reversibility, locative constructions encoding spatial relationships, constructions involving lexical prepositions, and a final test of classifier placement, orientation, and rotation. Their findings indicated that left-​hemisphere-​ damaged (LHD) BSL signers, relative to elderly signing control subjects, although exhibiting deficits on all comprehension tests, were relatively better at classifier constructions than lexical prepositions. Right hemisphere-​damaged signers (RHD) did not differ from controls on single sign and single predicate-​verb construction, or on sentences with a variety of argument structures and semantic reversibility. They were, however, impaired equally on tests of locative relationships expressed via classifier constructions and prepositions, and on a test of placement, orientation, and rotation of objects. Hickok et al. (2009) also found the same patterns for classifier production in RHD and LHD patients. One interpretation of these findings is that the comprehension of spatial language constructions requires not only intact left-​hemisphere resources, but intact right hemisphere visual-​spatial processing mechanisms as well. That is, while both LHD and RHD signers show comprehension deficits, the right hemisphere-​damaged signers’ difficulties stem from more general visual-​spatial deficits rather than linguistic processing difficulties per se. The question of whether these visual-​spatial deficits are ‘non-​linguistic’ lies at the heart of the debate about modality differences.

25.4.3  Gesture and sign processing contrasted: brain studies

By contrasting the processing of linguistically well-formed material with material that may be superficially similar but which cannot be analyzed linguistically, it is possible to address the question of whether the brain bases for sign language processing are the same as those for the processing of other visual manual actions. In one study, MacSweeney and colleagues contrasted BSL utterances with gestures derived from TicTac, the gestural code used by racetrack bookmakers to signal betting odds to each other (MacSweeney et al. 2004). The stimuli were modeled by a deaf native signer who constructed 'sentences' using hand gestures derived from TicTac codes, adding non-manual markers (facial gestures) to make these sequences more similar to BSL. Both types of input caused extensive activation throughout the left and right superior temporal lobe when compared to watching the model at rest. That is, much of the upper part of the temporal lobe is involved in attending to gestural displays whether these are linguistically structured or not. However, the brains of the signers who viewed the displays discriminated between the inputs: BSL activated a left-sided region located at the junction of the left posterior superior temporal gyrus and the supramarginal

gyrus in the parietal lobe (see Figure 25.6) much more than TicTac. This difference was not due to perceptual differences in the visual quality of the stimuli, because hearing people with no BSL knowledge showed no differences in activation between BSL and TicTac in this region. This region thus appears to be a language-​specific region that is not sensitive to the modality of the language it encounters.

Figure 25.6  Highlighted regions are those recruited to a greater extent by BSL perception than TicTac (nonsense gestures) in Deaf native signers. We interpret these regions as being particularly involved in language processing. Activation up to 5 mm below the cortical surface is displayed. Crosshairs are positioned at Talairach coordinates: X = −58, Y = −48, Z = 31. This is the junction of the inferior parietal lobule and the superior temporal gyrus (reprinted from MacSweeney et al. (2004) with permission)

Even though, in this comparison, sign and meaningless gesture activated different cerebral networks, meaningful gestures (pantomimes) and spoken language have been found to recruit overlapping areas. Xu et al. (2009) found that comprehending pantomimes (e.g., opening a jar) and their spoken language equivalents both engaged the left inferior frontal gyrus (IFG) and the left posterior middle temporal gyrus (MTG). The authors suggest that these regions form part of a general semantic network for human communication in comprehension. Emmorey et al. (2010) also found very similar patterns of activation within bilateral posterior temporal cortex when deaf signers passively viewed pantomimed actions and ASL signs, but with evidence for greater activation in the left IFG when viewing ASL signs. In a more recent fMRI study, Emmorey et al. (2013) also found that viewing both location and motion expressions involving classifier constructions engaged bilateral superior parietal cortex (see Figure 25.7).

Figure 25.7  Areas with activation greater for processing location and motion expressions with classifier constructions than for processing lexical signs (reprinted from Emmorey et al. (2013) with permission)

Studies of sign language impairments and neural processing thus demonstrate, on the one hand, modality-independent aspects of the brain's processing of language; on the other hand, there are also indications that the modality and/or form (i.e., iconicity) of the linguistic system may place specific demands on the neural mediation and implementation of language.

25.5  Conclusions: sign language

Research on sign languages has shown that when the auditory channel is not available, language can exist within the visual modality alone, with many of the linguistic structures identified for spoken languages. Unlike spoken languages, however, where new languages are always derived from interactions of existing languages and language users, new sign languages can emerge when Deaf people communicate with each other, even if they have no access to previously existing signed or spoken languages; that new language system can then be transferred to new generations. This is possible because modern humans can scaffold a new sign language on gesture, even when no language is accessible to them. Thus, the cognitive architecture of language can be instantiated anew – out of gesture – even in the absence of fully conventionalized language input. In this sense, the human capacity for language structure is, to some extent, not modality-specific. However, although sign languages and spoken/written languages share many features in terms of structure, processing, and neural organization, it has recently become more evident that sign languages make use of iconic structures specifically available to the visual modality. Furthermore, there is also evidence that modality-specific brain regions might subserve such modality-specific structures.

Use of the visual modality for linguistic expressions and communication is pervasive both in spoken and sign languages and is inevitable when we think of how languages evolved, emerge anew, and are acquired in a face-to-face context. Understanding the role the visual modality plays in language through sign and gesture (in signed or spoken language) is necessary for our understanding of linguistic structure as well as the cognitive and neural architecture of language. By reviewing the role of gesture and sign language in language structure, processing, and neural correlates, we have aimed to offer a joint

perspective towards understanding the role that the visual modality plays in both spoken and signed language.

The first insight from looking at both co-speech gestures and sign languages is that the visual modality can subserve similar linguistic functions and structures to those found in spoken languages – regardless of whether such visual structures accompany spoken language or whether the visual modality takes the whole burden of language and communication. In spoken languages, gestures are integrated into linguistic structure semantically, pragmatically, syntactically, and at the discourse level. In sign languages, the visual modality alone can subserve all levels of linguistic structure, such as phonology, morphology, syntax, and the lexicon. Thus, the visual modality can pattern and function linguistically in ways similar to those we see expressed through speech.

Cognitive and neural processing of visual expressions, as in gesture or sign language, also bears similarities to that of spoken/written structures. Gestures in production and comprehension are influenced by the processing stages of spoken languages (and vice versa). Comprehension of gestures recruits brain areas similar to those used by spoken languages (i.e., the left-lateralized fronto-temporal network). Similar cognitive and neural processes are involved in processing both sign and spoken languages. There are also similarities between sign and gesture in terms of their cognitive and neural underpinnings (for reviews, see Emmorey & Özyürek (2014) and Özyürek (2014)). These findings suggest, first, that gesture should not be excluded from spoken language research, as this misses a great deal of the structures and processes involved in formulating a linguistic message. Second, communication in the visual modality can be supported by cognitive/neural processes that are not specific to any modality.

However, the additional insight we get by looking at the visual modality is that both sign language and gesture can also reveal modality-specific characteristics (i.e., those which cannot be expressed through the speech channel), and in similar ways to each other. The visual modality allows visible, analogue, gradient, imagistic, and indexical representations to be expressed together with categorical and descriptive ones (in both spoken and sign languages). As such, it allows descriptive and categorical aspects of language to be grounded in the here-and-now, conveying Clark's (2016) 'depictive' and 'indicating' functions. This observation indicates that we need to widen our notion of language and its processing possibilities to include the modality-specific structures visible in both gesture and sign (Perniss et al. 2010; Goldin-Meadow & Brentari 2015). Finally, such iconic, analogue, gradient representations also recruit modality-specific brain areas – shared by both gesture and sign – outside of the 'classical' language network described above, as can be seen in the greater recruitment of parietal areas and the right hemisphere in spatial language processing in sign languages. Sign language perception also recruits areas overlapping with those involved in pantomime perception. These findings show that the cognitive and neural architecture of language is broader than is traditionally assumed and encompasses modality-specific and 'embodied' structures and representations – not only abstract, categorical, and arbitrary ones.
All in all, the review provided in this chapter suggests that our understanding of the structures, processes, and neural architecture of language, based on data from spoken languages alone, needs to be updated if we wish to fully characterize our linguistic capacity and its cognitive and neural architecture. Recent findings have challenged the views that sign language and spoken language are structured and processed identically, and that sign languages do not share similarities with gestures or iconic representations. In fact,

there is growing research showing that iconicity (motivated form-meaning mappings) may also be a characteristic feature of spoken language once non-European languages are considered, such as Japanese or several African languages, which contain many more iconic words in their lexicons than European languages do (e.g., Dingemanse et al. 2015). Future research is likely to characterize in more detail the modality-specific and modality-independent aspects of sign language, its cognitive underpinnings, and its neural correlates. Research on sign languages not of European origin is also needed to generalize the current findings, mostly based on ASL and BSL, to other sign languages. Finally, we hope that comparisons of spoken and signed languages will also increasingly include audiovisual speech and gesture, rather than only comparing sign languages to heard speech and writing (Goldin-Meadow & Brentari 2015; Perniss et al. 2015a, 2015b), so that we can more fully understand the modality-specific as well as the modality-independent nature of our language capacity.

Notes
1 Note that gradient expressions can also be realized and thus depicted using the speech channel; for example, in loooong, the extended duration of the vowel implies the meaning 'very long' (Okrent 2002).
2 All examples are from BSL unless otherwise stated.
3 It should be noted, however, that simultaneity with two hands is an option exercised differently by different sign languages (Saeed et al. 2000; Perniss et al. 2015).

References Atkinson, Jo, Jane Marshall, Bencie Woll, & Alice Thacker. 2005. Testing comprehension abilities in users of British Sign Language following CVA. Brain and Language 94(2). 233–​248. Bavelas, Janet B., Nicole Chovil, Douglas A. Lawrie, & Allan Wade. 1992. Interactive gestures. Discourse Processes 15. 469–​489. Beattie, Geoffrey & Heather Shovelton. 1999. Do iconic hand gestures really contribute anything to the semantic information conveyed by speech? An experimental investigation. Semiotica 123.  1–​30. Bernardis, Paolo & Maurizio Gentilucci. 2006. Speech and gesture share the same communication system. Neuropsychologica 44(2). 178–​190. Bosworth, Raine G. & Karen Emmorey. 2010. Effects of iconicity and semantic relatedness on lexical access in American Sign Language. Journal of Experimental Psychology: Learning, Memory, and Cognition 36(6). 15–​73. Brentari, Diane (ed.). 2010. Sign languages (Cambridge language surveys). Cambridge: Cambridge University Press. Carreiras, Manuel, Eva Gutiérrez-​Sigut, Silvia Baquero, & David Corina. 2008. Lexical processing in Spanish Sign Language (LSE). Journal of Memory and Language 58. 100–​122. Chu, Mingyuan & Sotaro Kita. 2016. Co-​thought and co-​speech gestures are generated by the same action generation process. Journal of Experimental Psychology: Learning Memory and Cognition 42(2). 257–​270. Chu, Mingyuan, Antje Meyer, Lucy Foulkes, & Sotaro Kita. 2014. Individual differences in frequency and saliency of speech-​accompanying gestures:  The role of cognitive abilities and empathy. Journal of Experimental Psychology: General 143(2). 694–​709. Clark, Herbert H. 1996. Using language. Cambridge: Cambridge University Press. Clark, Herbert H. 2016. Depicting as a method of communication. Psychological Review 123. 324–​347. Coppola, Marie & Elissa L. Newport. 2005. Grammatical subjects in home sign: Abstract linguistic structure in adult primary gesture systems without linguistic input. Proceedings of the National Academy of Sciences 102(52). 19249–​19253.

Bencie Woll & David Vinson Corina, David P. & Karen Emmorey. 1993. Lexical priming in American Sign Language. Paper presented at Annual Conference of the Linguistic Society of America, Philadelphia. Corina, David, Howard Poizner, Ursula Bellugi, Todd Feinberg, Dorothy Dowd, & Lucinda O’Grady-​Batch. 1992. Dissociation between linguistic and nonlinguistic gestural systems:  A case for compositionality. Brain and Language 43. 414–​447. Debreslioska, Sandra, Aslı Ӧzyürek, Marianne Gullberg, & Pamela M. Perniss. 2013. Gestural viewpoint signals referent accessibility. Discourse Processes 50(7). 431–​456. Defina, Rebecca. 2016. Do serial verb constructions describe single events? A study of co-​speech gestures in Avatime. Language 92(4). 890–​910. Dingemanse, Mark, Damián E. Blasi, Gary Lupyan, Morten H. Christiansen, & Padraic Monaghan. 2015. Arbitrariness, iconicity and systematicity in language. Trends in Cognitive Sciences 19(10). 603–​615. Dudis, Paul. 2004. Body partitioning and real space blends. Cognitive Linguistics 15(2). 223–​238. Dye, Matthew W.G. & Shui-​I Shih. 2006. Phonological priming in British Sign Language. In Louis M. Goldstein, Douglas H. Whalen, & Catherine T. Best (eds.), Laboratory Phonology 8, 243–​ 263. Berlin: Mouton de Gruyter. Emmorey, Karen. 1991. Repetition priming with aspect and agreement morphology in American Sign Language. Journal of Psycholinguistic Research 20(5). 365–​388. Emmorey, Karen. 2002. Language, cognition, and the brain: Insights from sign language research. Mahwah, NJ: Lawrence Erlbaum. Emmorey, Karen. 2014. Iconicity as structure mapping. Philosophical Transactions of the Royal Society B: Biological Sciences 369: 20130301. Emmorey, Karen, Thomas Grabowski, Stephen McCullough, Hanna Damasio, Laurie Ponto, Richard Hichwa, & Ursula Bellugi. 2004. Motor-​iconicity of sign language does not alter the neural systems underlying tool and action naming. Brain and Language 89(1). 27–​37. Emmorey, Karen, Stephen McCullough, Sonya Mehta, Laura L.B. Ponto, & Thomas J. Grabowski. 2013. The biology of linguistic expression impacts neural correlates for spatial language. Journal of Cognitive Neuroscience 25(4). 517–​533. Emmorey, Karen & Aslı Ӧzyürek. 2014. Language in our hands:  Neural underpinnings of sign language and co-​speech gesture. In Michael S. Gazzaniga & George R. Mangun (eds.), The cognitive neurosciences (5th edition), 657–​666. Cambridge, MA: MIT Press. Emmorey, Karen, Jiang Xu, Patrick Gannon, Susan Goldin-​Meadow, & Allen Braun. 2010. CNS activation and regional connectivity during pantomime observation:  No engagement of the mirror neuron system for deaf signers. Neuroimage 49. 994–​1005. Engberg-​Pedersen, Elisabeth. 1993. Space in Danish Sign Language:  The semantics and morphosyntax of the use of space in a visual language. Hamburg: Signum. Floyd, Simeon. 2016. Modally hybrid grammar? Celestial pointing for time-​of-​day reference in Nheengatú. Language 92(1). 31–​64. Goldin-​Meadow, Susan. 2003. The resilience of language: What gesture creation in deaf children can tell us about how all children learn language. New York: Psychology Press. Goldin-​Meadow, Susan & Diane Brentari. 2015. Gesture, sign and language: The coming of age of sign language and gesture studies. Behavioral and Brain Sciences 5. 1–​82. Gu, Yan, Lisette Mol, Marieke Hoetjes, & Marc Swerts. 2017. Conceptual and lexical effects on gestures: The case of vertical spatial metaphors for time in Chinese. Language, Cognition and Neuroscience 32(8). 1048–​1063. 
Hagoort, Peter & Jos van Berkum. 2007. Beyond the sentence given. Philosophical Transactions of the Royal Society. Series B: Biological Sciences 362. 801–​811. Hall, Matthew L., Victor S. Ferreira, & Rachel I. Mayberry. 2015. Syntactic priming in American Sign Language. PloS One 10(3). e0119611. Haviland, John B. 1993. Anchoring, iconicity, and orientation in Guugu Yimidhirr pointing gestures. Journal of Linguistic Anthropology 3(1). 3–​45. Hellige, Joseph B. 1993. Hemispheric asymmetry:  What’s right and what’s left. Cambridge, MA: Harvard University Press. Hickok, Gregory, Tracy Love-​Geffen, & Edward S. Klima. 2002. Role of the left hemisphere in sign language comprehension. Brain and Language 82(2). 167–​178.

Gesture and sign Hickok, Gregory, Herbert Pickell, Edward Klima, & Ursula Bellugi. 2009. Neural dissociation in the production of lexical versus classifier signs in ASL: Distinct patterns of hemispheric asymmetry. Neuropsychologia 47. 382–​387. Hohenberger Annette, Daniela Happ, & Helen Leuninger. 2002. Modality-​dependent aspects of signed language production: Evidence from slips of the hands and their repairs in German Sign Language. In Richard P. Meier, Kearsy Cormier, & David Quinto-​Pozos (eds.), Modality and structure in signed and spoken language, 112–​142. Cambridge: Cambridge University Press. Hohenberger, Annette & Helen Leuninger. 2012. Production. In Roland Pfau, Markus Steinbach, & Bencie Woll (eds.), Sign language. An international handbook, 711–​738. Berlin: De Gruyter Mouton. Iverson, Jana M. & Susan Goldin-​Meadow. 1998. Why people gesture when they speak. Nature 396. 228. Johnston, Trevor & Adam Schembri. 2010. Variation, lexicalization and grammaticalization in signed languages. Langage et Société (1). 19–​35. Kegl, Judy, Ann Senghas, & Marie Coppola. 1999. Creation through contact: Sign language emergence and sign language change in Nicaragua. In Michel DeGraff (ed.), Language creation and language change, 179–​237. Cambridge, MA: MIT Press. Kelly, Spencer D., Dale J. Barr, Ruth Breckinridge Church, & Katheryn Lynch. 1999. Offering a hand to pragmatic understanding:  The role of speech and gesture in comprehension and memory. Journal of Memory and Language 40. 577–​592. Kelly, Spencer D., Aslı Ӧzyürek, & Eric Maris. 2010. Two sides of the same coin: Speech and gesture mutually interact to enhance comprehension. Psychological Science 21. 260–​267. Kendon, Adam. 1986. Some reasons for studying gesture. Semiotica 62. 1–​28. Kendon, Adam. 2004. Gesture: Visible action as utterance. Cambridge: Cambridge University Press. Kendon, Adam. 2014. Semiotic diversity in utterance production and the concept of ‘language’. Philosophical Transactions of the Royal Society B 369: 20130293. Kita, Sotaro (ed.). 2003. Pointing:  Where language, culture, and cognition meet. Mahwah, NJ: Lawrence Erlbaum. Kita, Sotaro. 2009. Cross-​cultural variation of speech-​accompanying gesture: A review. Language and Cognitive Processes 24(2). 145–​167. Kita, Sotaro & Asli Özyürek. 2003. What does cross-​linguistic variation in semantic coordination of speech and gesture reveal? Evidence for an interface representation of spatial thinking and speaking. Journal of Memory and Language 48(1). 16–​32. Klima, Edward & Ursula Bellugi. 1979. The signs of language. Cambridge MA:  Harvard University Press. Krauss, Robert M., Yihsiu Chen, & Rebecca F. Gottesman. 2000. Lexical gestures and lexical access:  A process model. In David McNeill (ed.), Language and gesture, 261–​283. Cambridge: Cambridge University Press. Liddell, Scott. 1990. Four functions of a locus: Re-​examining the structure of space in ASL. In Ceil Lucas (ed.), Sign language research:  Theoretical issues, 176–​198. Washington, DC:  Gallaudet University Press. Liddell, Scott K. 2003. Grammar, gesture and meaning in American Sign Language. Cambridge: Cambridge University Press. van Loon, Esther, Roland Pfau, & Markus Steinbach. 2014. The grammaticalization of gestures in sign languages. In Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill, & Sedinha Tessendorf (eds.), Body –​language –​communication: An international handbook on multimodality in human interaction, 2133–​2149. Berlin: De Gruyter Mouton. 
MacSweeney, Mairéad, Ruth Campbell, Bencie Woll, Vincent Giampietro, Anthony S. David, Philip K. McGuire, & Michael J. Brammer. 2004. Dissociating linguistic and non-​linguistic gestural communication in the brain. Neuroimage 22(4). 1605–​1618. MacSweeney, Mairéad, Cheryl M. Capek, Ruth Campbell, & Bencie Woll. 2008. The signing brain: The neurobiology of sign language. Trends in Cognitive Science 12. 432–​440. MacSweeney, Mairéad, Bencie Woll, Ruth Campbell, Philip McGuire, Anthony S. David, Steven Williams, & Michael Brammer. 2002. Neural systems underlying British Sign Language and audio-​visual English processing in native users. Brain 125. 1583–​1593.

Bencie Woll & David Vinson Marshall, Jane, Joanna Atkinson, Elaine Smulovitch, Alice Thacker, & Bencie Woll. 2004. Aphasia in a user of British Sign Language: Dissociation between gesture and sign. Cognitive Neuropsychology 21(5). 537–​555. Mayberry, Rachel I. 2010. Early language acquisition and adult language ability: What sign language reveals about the critical periods. In Marc Marschark & Patricia E. Spencer (eds.), The Oxford handbook of Deaf studies, language, and education (2nd edition), 281–​291. Oxford: Oxford University Press. McNeill, David. 1992. Hand and mind: What gestures reveal about thought. Chicago: University of Chicago Press. McNeill, David. 2005. Gesture and thought. Chicago: University of Chicago Press. McNeill, David, Justine Cassell, & Elena Levy. 1993. Abstract deixis. Semiotica 95. 5–​19. Meier, Richard P. 2002. Why different, why the same? Explaining effects and non-​effects of modality upon linguistic structure in sign and speech. In Richard P. Meier, Kearsy Cormier, & David G. Quinto-​Pozos (eds.), Modality and structure in signed and spoken languages, 1–​25 Cambridge: Cambridge University Press. Meier, Richard P., Claude E. Mauk, Adrianne Cheek, & Christopher J. Moreland. 2008. The form of children’s early signs: Iconic or motoric determinants? Language Learning and Development 4.  63–​98. Morgan, Gary & Bencie Woll. 2007. Understanding sign language classifiers through a polycomponential approach. Lingua 117(7). 1159–​1168. Morrel-​Samuels, Palmer & Robert M. Krauss. 1992. Word familiarity predicts temporal asynchrony of hand gestures and speech. Journal of Experimental Psychology: Learning, Memory and Cognition 18. 615–​623. Müller, Cornelia. 2009. Gesture and language. In Kirsten Malmkjaer (ed.), The Routledge linguistics encyclopedia, 214–​217. London: Routledge. Neville, Helen J., Daphne Bavelier, David Corina, Josef Rauschecker, Avi Karni, Anil Lalwani, Allen Braun, Vince Clark, Peter Jezzard, & Robert Turner. 1998. Cerebral organization for language in deaf and hearing subjects: Biological constraints and effects of experience. Proceedings of the National Academy of Sciences USA 95. 922–​929. Nyst, Victoria. 2012. Shared sign languages. In Roland Pfau, Markus Steinbach, & Bencie Woll (eds.), Sign language. An international handbook, 552–​574. Berlin: De Gruyter Mouton. Okrent, Arika. 2002. A modality-​free notion of gesture and how it can help us with the morpheme vs. gesture question in sign language linguistics (or at least give us some criteria to work with). In Richard P. Meier, Kearsy Cormier, & David Quinto-​Pozos (eds.), Modality and structure in signed and spoken languages. 175–​198. Cambridge: Cambridge University Press. Özçalışkan, Şeyda, Ché Lucero, & Susan Goldin-​Meadow. 2016. Is seeing gesture necessary to gesture like a native speaker? Psychological Science 27(5). 737–​747. Ӧzyürek, Aslı. 2012. Gesture. In Roland Pfau, Markus Steinbach, & Bencie Woll (eds.), Sign language: An international handbook, 626–​646. Berlin: De Gruyter Mouton. Ӧzyürek, Aslı. 2014. Hearing and seeing meaning in speech and gesture: Insights from brain and behaviour. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences 369: 20130296. Ӧzyürek, Aslı, Reyhan Furman, & Susan Goldin-​Meadow. 2014. On the way to language: Event segmentation in homesign and gesture. Journal of Child Language 42. 64–​94. Ӧzyürek, Aslı, Sotaro Kita, Shanley Allen, Reyhan Furman, & Amanda Brown. 2005. 
How does linguistic framing of events influence co-​speech gestures? Insights from crosslinguistic variations and similarities. Gesture 5(1/​2). 219–​240. Ӧzyürek, Aslı, Roel M. Willems, Sotaro Kita, & Peter Hagoort. 2007. On-​line integration of semantic information from speech and gesture:  Insights from event-​related brain potentials. Journal of Cognitive Neuroscience 19(4). 605–​616. Padden, Carol. 1988. Interaction of morphology and syntax in American Sign Language. New York: Garland. Peeters, David, Peter Hagoort, & Aslı Ӧzyürek. 2015. Electrophysiological evidence for the role of shared space in online comprehension of spatial demonstratives. Cognition 136. 64–​84. Peeters, David & Aslı Ӧzyürek. 2016. This and that revisited: A social and multimodal approach to spatial demonstratives. Frontiers in Psychology 7. 222.

Gesture and sign Peeters, David, Tineke M. Snijders, Peter Hagoort, & Aslı Ӧzyürek. 2017. Linking language to the visual world: Neural correlates of comprehending verbal reference to objects through pointing and visual cues. Neuropsychologia 95. 21–​29. Perniss, Pamela & Aslı Ӧzyürek. 2015. Visible cohesion: A comparison of reference tracking in sign, speech, and co-​speech gesture. Topics in Cognitive Science 7(1). 36–​60. Perniss, Pamela, Robin L. Thompson, & Gabriella Vigliocco. 2010. Iconicity as a general property of language: Evidence from spoken and signed languages. Frontiers in Psychology 1. 1–​15. Perniss, Pamela, Aslı Ӧzyürek, & Gary Morgan. 2015a. The influence of the visual modality on language structure and conventionalization: Insights from sign language and gesture. Topics in Cognitive Science 7(1). 2–​11. Perniss, Pamela, Inge Zwitserlood, & Aslı Ӧzyürek. 2015b. Does space structure spatial language? A comparison of spatial expression across sign languages. Language 91(3). 611–​641. Pfau, Roland, Martin Salzmann, & Markus Steinbach. 2018. The syntax of sign language agreement: Common ingredients, but unusual recipe. Glossa: A Journal of General Linguistics 3(1): 107.  1–​46. Pfau, Roland & Markus Steinbach. 2006. Modality-​independent and modality-​specific aspects of grammaticalization in sign languages (Linguistics in Potsdam 24). Potsdam: Universitäts-​Verlag. Pfau, Roland & Markus Steinbach. 2011. Grammaticalization in sign languages. In Heiko Narrog & Bernd Heine (eds.), The Oxford handbook of grammaticalization, 683–​695. Oxford: Oxford University Press. Poizner, Howard, Ursula Bellugi, & Ryan D. Tweney. 1981. Processing of formational, semantic, and iconic information in American Sign Language. Journal of Experimental Psychology: Human Perception and Performance 7 (5). 1146–​1159. Poizner, Howard, Edward S. Klima, & Ursula Bellugi. 1987. What the hands reveal about the brain. Cambridge MA: MIT Press. Orfanidou, Eleni, Robert Adam, Gary Morgan, & James M. McQueen. 2010. Recognition of signed and spoken language:  Different sensory inputs, the same segmentation procedure. Journal of Memory and Language 62(3). 272–​283. Quinto-​Pozos, David & Fey Parrill. 2015. Signers and co-​speech gesturers adopt similar strategies for portraying viewpoint in narratives. Topics in Cognitive Science 7. 12–​35. de Ruiter, Jan. 2000. The production of gesture and speech. In David McNeill (ed.), Language and gesture, 248–​311. Cambridge: Cambridge University Press. de Ruiter, Jan, Adrian Bangerter, & Paula Dings. 2012. The interplay between gesture and speech in the production of referring expressions:  Investigating the tradeoff hypothesis. Topics in Cognitive Science 4(2). 232–​248. Saeed, John I., Rachel Sutton-​Spence, & Lorraine Leeson. 2000. Constituent order in Irish Sign Language and British Sign Language:  A preliminary examination. Poster presented at the 7th International Conference on Theoretical Issues in Sign Language Research (TISLR7), Amsterdam. San Jose-​Robertson, Lucia, David Corina, Debra Ackerman, Andre Guillemin, & Allen R. Braun. 2004. Neural systems for sign language production: Mechanisms supporting lexical selection, phonological encoding, and articulation. Human Brain Mapping 23. 156–​167. Sandler, Wendy & Diane Lillo-​ Martin. 2006. Sign language and linguistic universals. Cambridge: Cambridge University Press. Sandler, Wendy, Irit Meir, Carol Padden, & Mark Aronoff. 2005. The emergence of grammar: Systematic structure in a new language. 
Proceedings of the National Academy of Sciences 102(7). 2661–​2665. Schembri, Adam, Kearsy Cormier, & Jordan Fenlon. 2018. Indicating verbs as typologically unique constructions: Reconsidering verb ‘agreement’ in sign languages. Glossa: A Journal of General Linguistics 3(1): 89. 1–​40. Schembri, Adam, Caroline Jones, & Denis Burnham. 2005. Comparing action gestures and classifier verbs of motion: Evidence from Australian Sign Language, Taiwan Sign Language, and nonsigners’ gestures without speech. Journal of Deaf Studies & Deaf Education 10(3). 272–​290. Senghas, Ann, Aslı Ӧzyürek, & Sotaro Kita. 2004. Children creating core properties of language: Evidence from an emerging sign language in Nicaragua. Science 305(5691). 1779–​1782. Slobin, Dan, Nini Hoiting, Marlon Kuntze, Reyna Lindert, Amy Weinberg, Jennie Pyers, Michelle Anthony, Yael Biederman, & Helen Thumann. 2003. A cognitive/​functional perspective on the

Bencie Woll & David Vinson acquisition of ‘classifiers’. In Karen Emmorey (ed.), Perspectives on classifier constructions in sign languages, 297–​310. Mahwah, NJ: Lawrence Erlbaum. Stokoe, William. 1960. Sign language structure: An outline of the visual communication systems of the American Deaf. Silver Spring, MD: Linstok Press. Strickland, Brent, Carlo Geraci, Emmanuel Chemla, Philippe Schlenker, Meltem Kelepir, & Roland Pfau. 2015. Event representations constrain the structure of language: Sign language as a window into universally accessible linguistic biases. Proceedings of the National Academy of Sciences 112(19). 5968–​5973. Supalla, Ted. 1986. The classifier system in American Sign Language. In Colette Craig (ed.), Noun classes and categorization, 181–​214. Amsterdam: John Benjamins. Sutton-​Spence, Rachel & Bencie Woll. 1999. The linguistics of British Sign Language: an introduction. Cambridge: Cambridge University Press. Talmy, Leonard. 1985. Lexicalization patterns:  Semantic structure in lexical forms. In Timothy Shopen (ed.), Language typology and syntactic description. Vol. III: Grammatical categories and the lexicon, 57–​149. Cambridge: Cambridge University Press. Taub, Sarah F. 2001. Language from the body: Iconicity and metaphor in American Sign Language. Cambridge: Cambridge University Press. Thompson, Robin, Karen Emmorey, & Tamar H. Gollan. 2005. ‘Tip of the fingers’ experiences by deaf signers. Psychological Science 16(11). 856–​860. Thompson, Robin L., David P. Vinson, & Gabriella Vigliocco. 2010. The link between form and meaning in British sign language:  Effects of iconicity on phonological decisions. Journal of Experimental Psychology: Learning, Memory and Cognition 36. 1017–​1027. Thompson, Robin L., David P. Vinson, Bencie Woll, & Gabriella Vigliocco. 2012. The road to language learning is iconic:  Evidence from British Sign Language. Psychological Science 23(12). 1443–​1448. Tversky, Barbara. 2011. Visualizations of thought. Topics in Cognitive Science 3. 499–​535. Vermeerbergen, Myriam, Lorraine Leeson, & Onno A. Crasborn (eds.). 2007. Simultaneity in signed languages: Form and function. Amsterdam: John Benjamins. Vigliocco, Gabriella, Pamela Perniss, & David Vinson. 2014. Language as a multimodal phenomenon: Implications for language learning, processing, and evolution. Philosophical Transactions of the Royal Society B 369. 2013–​0292. Wagner, Petra, Zofia Malisz, & Stefan Kopp. 2014. Gesture and speech in interaction: An overview. Speech Communication 57. 209–​232. Wilbur, Ronnie B. 2008. Complex predicates involving events, time, and aspect:  Is this why sign languages look so similar? In Josep Quer (ed.), Signs of the time. Selected papers from TISLR 8, 217–​250. Hamburg: Signum Press. Wilcox, Sherman. 2004. Gesture and language. Cross-​linguistic and historical data from signed languages. Gesture 4(1). 43–​73. Wilcox, Sherman. 2007. Routes from gesture to language. In Elena Pizzuto, Paola Pietrandrea, & Raffaele Simone (eds.), Verbal and signed languages. Comparing structures, constructs, and methodologies, 107–​131. Berlin: Mouton de Gruyter. Woll, Bencie & Paddy Ladd. 2010. Deaf communities. In Marc Marschark & Patricia E. Spencer (eds.), The handbook of Deaf studies, language and education (2nd edition), 159–​172. Oxford: Oxford University Press. Xu, Jiang, Patrick J. Gannon, Karen Emmorey, Jason F. Smith, & Allen R. Braun. 2009. Symbolic gestures and spoken language are processed by a common neural system. 
Proceedings of the National Academy of Sciences 106(49). 20664–​20669. Yap, De-​Fu, Wing-​Chee So, Ju-​Min M. Yap, & Ying-​Quan Tan. 2011. Iconic gestures prime words. Cognitive Science 35. 171–​183.

26 INFORMATION STRUCTURE
Theoretical perspectives
Vadim Kimmelman & Roland Pfau

26.1  Introduction

Information structure is a field of linguistics covered in numerous books and articles. Information structure in sign languages has also been investigated almost from the first days of sign linguistics; however, as is often the case, most of the available studies focus on a very small number of sign languages, and among these, American Sign Language (ASL) is the one most prominently represented. Nevertheless, some interesting results have been obtained that can be relevant for the theory of information structure in general. In this chapter, we are going to use the terminology commonly used in the information structure literature: in particular, topic, focus, contrast, and emphasis. More specific terms are explained throughout the chapter. A reader not familiar with the field of information structure is encouraged to consult Krifka (2008) for a concise and accessible overview. Recently, two handbook chapters devoted to information structure in sign languages have appeared: Wilbur (2012) and Kimmelman & Pfau (2016). Moreover, Kimmelman (2019) is a book-length study of information structure in sign languages, investigating information structure notions based on corpus data from Sign Language of the Netherlands (NGT) and Russian Sign Language (RSL). In this chapter, we thus try to avoid overlap with these works to the extent possible and report new results and new perspectives on the topic. In addition, we go beyond previous overviews by also addressing generative approaches to certain aspects of information structure in sign languages. In Section 26.2, we discuss topic and focus marking in sign languages. Section 26.3 is devoted to recent studies that shed light on a number of particularly interesting phenomena, some of which appear to be modality-specific. In Section 26.4, we offer a brief overview of experimental research on information structure. Section 26.5 concludes the chapter.

26.2  Information structure: description and formalization

We start by providing an overview of how different types of topics (Section 26.2.1) and foci (Section 26.2.2) are marked in different sign languages. To that end, we address syntactic position, non-manual (prosodic) marking, and for foci also manual marking. As
for syntactic positioning, we further address, in Section 26.2.3, how the distribution of topic and focus has been accounted for within a hierarchically organized phrase structure.

26.2.1  Strategies for topic marking

Topic marking is quite common and salient in sign languages, and this probably explains why topic marking has been described from the early days of sign linguistics, first for ASL (Friedman 1976; Ingram 1978) and later also for other sign languages. However, as is also the case for spoken languages, some terminological confusion has emerged in the literature. The most confusing term is the term 'topic' itself, which is sometimes used to refer to the pragmatic function, but, at the same time, is often used to refer to the syntactic operation of topicalization, whereby a constituent is moved to the sentence-initial position, but which pragmatically can be used to mark either topic or (contrastive) focus (see, for instance, Chen Pichler (2010) as an example of such use of the term 'topic'). The term 'topicalization' itself is confusing due to the natural implication that topicalization marks topics, which, however, is not necessarily the case. In this chapter, we use the term 'topic' without further explanation to refer to the pragmatic function only.

Many sign languages have been shown to mark topics by means of syntactic and prosodic strategies, and the markers seem strikingly similar across different sign languages: sentence-initial position, a prosodic break separating the topic from the rest of the sentence (this can be a change of non-manuals, a manual pause/hold, and/or an eye blink, Nespor & Sandler (1999)), and specific non-manual markers, typically eyebrow raise and head movement (see Sze (2011) for a comparison of non-manual markers across different sign languages). The combination of various markers is illustrated in (1) from Israeli Sign Language (ISL).1

          br  squint  hf   blink
(1)   CAKE,                IX1 EAT.UP DEPLETE

‘I ate the cake completely.’ (ISL, adapted from Nespor & Sandler 1999: 165) An important theoretical issue is the relation between these three types of markers. Topics that are in the sentence-​initial position are also always prosodically isolated from the rest of the sentence. It is not clear whether in situ topics can also be prosodically isolated; a crucial example would be an object topic in situ separated by a prosodic boundary from the rest of the sentence, but such examples do not seem to occur. Non-​manual marking, however, is not a necessity. For several sign languages (ASL: Todd 2008; Hong Kong Sign Language (HKSL): Sze 2008; RSL and NGT: Kimmelman 2015; Italian Sign Language (LIS): Calderone 2020), it has been shown that topics are not always marked with raised eyebrows, although they might be. Finally, if a topic is marked by a non-​manual marker, this automatically implies that it is prosodically isolated, since the marker accompanies the topic only and thus distinguishes it from the rest of the sentence.

Semantically, a common distinction is between aboutness topics and scene-​setting topics (Sze 2008; Kimmelman 2014). Despite the functional differences –​aboutness topics are typically old information and refer to the argument of the verb, while scene-​setting topics are often new information and adjuncts –​they are marked by similar markers at least in some sign languages. Syntactically, a distinction is often made between three types of topics (similar to spoken languages):  hanging topics, left dislocation, and topicalization (see Wilbur (2012) for further discussion). Hanging topics are not integrated syntactically into the rest of the sentence, and often refer to a hypernym of an element within the sentence (e.g., As for vegetables, I like tomatoes). Left-​dislocated topics are also not syntactically integrated, but are co-​referent with a pronoun within the sentence (e.g., Tomatoes, I  like them). In the case of topicalization, the topic is co-​referent with a gap in the sentence (e.g., Tomatoes, I  like ø). As already mentioned, at least topicalization and probably also left dislocation is also used for pragmatic functions other than topics. Since hanging topics and left-​dislocated topics are not integrated into the rest of the sentence, they are usually analyzed as being base-​generated in sentence-​initial position, while topicalized topics are usually analyzed as moved, leaving behind a trace in their original sentence-​internal position. It is worth noting that it has been claimed for ASL (Aarons 1996) and German Sign Language (DGS; Bross 2018) that base-​generated and moved topics are accompanied by different non-​manual markers. Elaborate syntactic analyses postulating separate functional projections to host different types of topics and foci exist in the literature (e.g., Lillo-​Martin & de Quadros 2008); these will be addressed in Section 26.2.3. Another interesting syntactic way of marking topics is topic-​copying, discussed for NGT in Crasborn et al. (2009). In this language, a sentence-​final pronoun can be used to refer back to the topic of the sentence. Importantly, Crasborn et al. (2009) demonstrated that this pronoun can refer to both subject (2a) and object (2b) aboutness topics, and also to scene-​setting topics. Therefore, this pronoun is indeed an instance of topic marking and not a subject pronoun copy, as had previously been proposed for NGT by Bos (1995). (2)  a. 

      GIRL IXleft, IXleft BOOK THROW.AWAY IXleft
      'That girl, she threw away the book.'
  b.  BOOK IXright, IXleft THROW.AWAY IXright
      'He threw the book away.'

(NGT, adapted from Crasborn et al. 2009: 359) A theoretically important issue concerning topics in sign languages is whether (some) sign languages can be considered topic prominent (Li & Thompson 1976). The question is debatable because there is no universally accepted definition of topic prominence, also for spoken languages. For some sign languages, it has been argued that they are topic prominent (ASL: McIntire 1982), for others that they are not topic prominent (HKSL: Sze 2008; RSL and NGT:  Kimmelman 2019). Jantunen (2013) argued that Finnish Sign Language (FSL) is discourse-​oriented, which is a different notion not centered around topic marking and might better describe other sign languages as well.

26.2.2  Strategies for focus marking

Focus marking in sign languages has been studied somewhat less well than topic marking. However, some general patterns have been identified, and again, different sign languages show interesting similarities as well as variation when it comes to the details of focus realization (see, for instance, Wilbur 1996; Crasborn & van der Kooij 2013; Herrmann 2015; Kimmelman 2019). In general, and similar to what we observed for topics, focus in sign languages can be marked syntactically and prosodically. In addition, focus particles can be used to express focus-related meanings (see Volk & Herrmann, Chapter 22, for further information).

Syntactically, focus can be marked by topicalization (fronting), as mentioned in the previous section. For instance, in (3) from ASL, the topicalized constituent receives contrastive focus.

(3)   JOHN NOT-LIKE JANE.
        top
      MARY, IX3 LOVE
      'John doesn't like Jane. Mary, he loves.'
      (ASL, adapted from Aarons 1996: 76)

Another common syntactic strategy for focus marking attested in various sign languages is the use of cleft-like constructions, where focus and background are expressed by separate clauses in a bi-clausal structure, as in (4).2 The first clause, which includes a wh-element, contains background information; it is accompanied by brow raise and followed by a prosodic break. The focus constituent appears in the second clause (with empty copula). Note, however, that there is disagreement about the syntactic as well as pragmatic status of such constructions, which we address in Section 26.3.3.

                  br
(4)   IX1 DISLIKE WHAT /  LEE POSS TIE

‘What I dislike is Lee’s tie.’ (ASL, Wilbur 1996: 210) In fact, Petronio (1991) and Wilbur (1999) have argued that focus in ASL has to move to a specific position to receive prosodic prominence (see Section 26.3.1), namely to the final position in a clause. Clefting is a strategy that a signer can use to place any constituent in the final position (while the background becomes the free relative clause). However, Schlenker et al. (2016) have demonstrated that in ASL, prosodic marking of the focused constituent can also happen in situ, thus arguing against obligatory movement for focus. In situ focus marking is also clearly attested in French Sign Language (Schlenker et al. 2016), DGS (Herrmann 2015), NGT (Crasborn & van der Kooij 2013), and RSL (Kimmelman 2019). Furthermore, in some sign languages, focus (more specifically, emphatic focus) can be marked by doubling the relevant constituent; for other sign languages, the relation between focus and doubling has been argued to be more complicated. We return to this issue in Section 26.3.4. Focus in sign languages can also be marked prosodically, be it by non-​manual markers and/​or prosodic modifications of manual signs (see Fenlon & Brentari, Chapter  4).

Concerning non-​manual markers, two aspects are of interest. First, there is some overlap between non-​manual markers for topic and for focus. For instance, both in ASL (Wilbur & Patschke 1999) and in NGT (Crasborn & van der Kooij 2013), eyebrow raise is used in both functions. This has led Wilbur & Patschke (1999) to argue that eyebrow raise in ASL does not mark topics or foci but is instead a result of A’-​movement (also see Wilbur, Chapter 24, for discussion). However, there are also markers that are used for foci but not for topics. For instance, forward and backward body leans have been found to mark focus in ASL and NGT (Wilbur & Patschke 1998; van der Kooij et al. 2006). Second, as for focus, cross-​linguistic variation in the extent to which non-​manual markers are used has been observed. Kimmelman (2019) demonstrated that NGT uses a larger set of non-​ manuals than RSL to mark focus, and that non-​manual marking in NGT is employed more frequently. For instance, NGT, unlike RSL, uses eyebrow raise and head tilt for focus marking. Head nods are attested in both sign languages, but while NGT uses head nods for different types of focus, RSL only uses them to mark contrastive focus (see also Section 26.3.4). Concerning manual prosodic marking, all sign languages described so far modify signs in focus for them to be more stressed or prosodically prominent. For instance, focused signs in RSL may be longer, contain more repetitions, and a larger amplitude, and may be articulated higher than unfocused productions of the same signs (Kimmelman 2019). For ASL, different researchers reported different prosodic correlates of focus: for instance, Coulter (1990) found focused signs to be shorter, while Wilbur (1999) found them to be longer, and, according to Schlenker et al. (2016), focused signs are signed faster (which can potentially make them shorter) but also contain longer holds (which can potentially make them longer). Furthermore, various researchers noted that there is no unique manual prosodic marker that is used to mark focus in all cases. For instance, Kimmelman (2019) found that, in RSL and NGT, no single manual prosodic marker is obligatory, and that the presence of prosodic markers is influenced by phonology (i.e., whether the sign contains path or hand-​internal movement) and focus type, as well as some other factors that are currently unknown.

26.2.3  Information structure and the left periphery

Since the seminal work by Rizzi (1997, 2001), many scholars working in the Generative tradition (Chomsky 1981, 1995) have assumed that three types of structural layers should be distinguished within a hierarchical phrase structure: a lexical layer (where lexical elements are merged), an inner functional layer (for functional categories like aspect and negation), and an outer functional layer. The latter layer has also been referred to as the 'left periphery' of the clause, and is argued to constitute the interface between a propositional content and the superordinate structure (i.e., a higher clause or the discourse). The hierarchically organized functional projections it contains replace what had previously been labeled Complementizer Phrase (CP). The fine structure of the left periphery is illustrated in Figure 26.1 (see also Figure 24.1 in Wilbur, Chapter 24, for a structure involving all three layers).

Figure 26.1  Fine structure of the left periphery (based on Rizzi 1997, 2001)
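For orientation, the ordering of projections in the left periphery can also be given in linear form. The following is a simplified sketch of the hierarchy proposed by Rizzi (1997, 2001), with the recursive topic positions marked by an asterisk; further projections discussed in that work are omitted here:

$$\text{ForceP} > \text{TopP}^{*} > \text{IntP} > \text{TopP}^{*} > \text{FocP} > \text{TopP}^{*} > \text{FinP} > \text{IP}$$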

As is evident, this structure contains dedicated positions for topics and focus. The assumption is that features hosted in the heads of TopP/FocP attract topic and focus constituents into their specifiers (in situ focus, as discussed in the previous section, has to be accounted for in a different way). This is what we observe in (2) and (3) above. For some sign languages, it has been argued that the non-manual markers involved are actually the overt realization of syntactic features residing in functional heads (e.g., Neidle et al. 2000; Pfau 2016; Bross 2018), and that they associate with the constituent in their specifier under Spec-head agreement. The above structure also suggests that clause-initial topic and focus constituents can be combined, and this is indeed what we observe in the ASL example in (5a). The Gungbe (Kwa, Benin) example in (5b) shows the same order, but in this language, the Top and Foc heads are lexicalized by dedicated particles.

(5)   a.   top      foc
      FRUIT /  BANANA /  JOHN LIKE MORE
      'As for fruit, John likes bananas best.'
      (ASL, Lillo-Martin & de Quadros 2008: 169)

      b.  […] [dàn   lɔ́]        yà   [Kòfí]  wέ   ún   hù-ì      ná
              snake  SPF[+def]  TOP  Kofi    FOC  1SG  kill-3SG  for
          '(I said that,) as for the specific snake, I killed it for Kofi.'
          (Gungbe, adapted from Aboh 2004: 291)

Moreover, the structure also allows for multiple topics, that is, topic stacking, an option that is illustrated in (6). In the NGT example in (6a), a left-​dislocated topic precedes a spatial scene-​setting topic (also note the clause-​final pronoun referring back to the former topic), while in the DGS example in (6b), a base-​generated (hanging) topic precedes a moved topic (topicalization).




          neutral           tilted              nod       neutral
(6)  a.   IXright PERSON,   TOMORROW AT.HOME,   IXright   NEWSPAPER READ IXright
          'The man, tomorrow at home, he will read the newspaper.'
                                                  (NGT, Crasborn et al. 2009: 359)

          base-top       moved-top
     b.   VEGETABLES,    PEPPERi,   PAUL ti LIKE
          'As for vegetables, as for pepper, Paul likes (it).'      (DGS, Bross 2018: 62)

Certain constraints on the combination of topics have been proposed in the literature. Clearly, such constraints have implications for the nature of the topic phrases involved, which may have to be further specified with regard to what types of topic they can host (also note that Rizzi (1997) assumes that both TopPs in Figure 26.1 are recursive). We shall not go into details here, but only point out that for both ASL and DGS, it has been claimed (i) that two base-generated topics may co-occur (6a), and (ii) that base-generated topics must precede moved topics (Aarons 1996; Bross 2018), that is, the reverse order in (6b) would yield an ungrammatical result. Bross (2018: 63) concluded that "the observations are in line with the idea that base-generated frame setters are structurally higher than moved aboutness topics" (cf. Lillo-Martin & de Quadros (2008); see also Calderone (2020) for a recent discussion of the left periphery in LIS).3

Rizzi (2001) introduced an interrogative phrase into the left periphery (see Figure 26.1), the specifier of which is claimed to attract interrogative clauses. The NGT examples in (7) illustrate that topics may precede different types of interrogatives, as predicted. In (7b), the manual question particle occupies the head of InterP, the entire clause moves to SpecInterP, and subsequently, the argument topic BOOK moves further to the specifier of the higher TopP (see Bross (2018) for DGS).4

        top             y/n
(7)  a. HORSE INDEX3,   INDEX2 STROKE3 DARE^INDEX2
        'As for the horse, do you dare to stroke it?'          (NGT, Pfau 2016: 48)

        top       wh
     b. BOOK,     STEAL WHO Q-PART
        'As for the book, who stole it?'                       (NGT)

A final issue we briefly address is the presence of a full-fledged left periphery within embedded clauses. The Gungbe example in (5b) shows that complement clauses in this language may feature topics and foci. Similarly, embedded topics are allowed in ASL (8a) and NGT complement clauses. As for adjunct clauses, however, there is an ongoing debate on whether they contain a full-fledged left periphery or not. For instance, while Maki et al. (1999) argue that embedded argument topicalization is impossible in (English and Japanese) adjunct clauses, Haegeman (2003) demonstrates that it is possible in certain types of English conditional clauses. The NGT example in (8b) suggests that embedded topicalization is indeed allowed in conditional clauses. Given that the conditional clause is introduced by the conjunction SUPPOSE, it is clear that the (moved) topic BOOK is located within the conditional clause (it may also precede the conditional clause). Note that conditional non-manual marking in NGT also commonly involves raised eyebrows (Klomp 2019); the topic may additionally be marked by a head nod.


                           top
(8)  a.  TEACHER REQUIRE   MOTHERi, JOHN MUST LIPREAD ti
         'The teacher requires that mother, John must lipread.'    (ASL, Aarons 1996: 97)

         cond                                    neg
     b.  SUPPOSE BOOK INDEX3i INDEX2 ti READ,  SLEEP CAN^NOT
         'If this book you read, you won't be able to sleep.'      (NGT)

26.3  Information structure in the visual-gestural modality: new directions

We now turn to recent studies that investigate specific aspects of the realization of information structure across sign languages: the relation between focus and prominence, the realization of contrast, the analysis of question-answer pairs (also known as wh-clefts and rhetorical questions), doubling and its functions, as well as weak hand holds and their relation to discourse. An important component of our discussion is the impact of the visual-gestural modality on the syntactic and prosodic encoding of information structure. While some of the phenomena we address are clearly modality-specific (e.g., weak hand holds), others may also be attested in spoken languages but be typologically less common (e.g., doubling).

26.3.1  Focus and prominence

In spoken languages, focus is almost universally associated with prosodic prominence (Zimmermann & Onea 2011). Gussenhoven (2004) offers an explanation for this pattern by formulating three biological codes, including the Effort Code, which states that speakers, in order to signal that the information they are conveying is important, use greater effort in articulation, resulting in greater precision and wider excursion of pitch movement. This general tendency can be grammaticalized as prosodic (pitch-related) marking of focus in spoken languages. Various researchers have argued that the same biological motivation applies to prosodic marking in sign languages (e.g., Pfau & Steinbach 2006; Sandler 2011). While the notion of pitch does not apply, signers can also use greater effort to articulate signs, which will lead to higher velocity, larger amplitude, clearer boundaries, etc., and this effort can be interpreted as relating to new and important information, that is, focus. Crasborn & van der Kooij (2013) argue that this applies to both manual and non-manual markers of focus in NGT: the presence of both types of markers requires more effort, which is why they are used to mark focus.

However, it is not clear that all focus markers can be explained directly by the Effort Code. Schlenker et al. (2016) note that forward leans or head nods used for focus are not a direct manifestation of greater effort (in comparison to, e.g., backward leans and headshakes which could have been used instead). Even more strikingly, Herrmann (2015) shows that, in DGS, the focused constituent is sometimes de-accented, that is, while the rest of the sentence is accompanied by non-manual markers, the focused constituent is not, as in (9). While this type of marking clearly helps the addressee to identify the focus, it does not fit the proposal that focus has to be articulated with greater effort (ht-f = head tilt forward, aff-hn = affirmative head nod).


            ht-f,hn   aff-hn
(9)   TIM   ALSO      BANANA EAT
      'Also Tim eats a banana.'                     (DGS, Herrmann 2015: 289)

To sum up, while Gussenhoven’s Effort Code can explain the general connection between prosody and focus in both modalities, it does not explain the details of language-​specific realization of focus in all cases in sign languages (which, by the way, is also true for spoken languages, see Gussenhoven 2004: 86–​88).

26.3.2  Contrast

One of the theoretical questions that sign languages can contribute to is whether contrast is a notion orthogonal to focus, or whether all focus is contrastive to some degree (see, e.g., Repp 2016). Data from sign languages is relevant because some sign languages have dedicated markers of contrast, and yet, sometimes the difference between focus and contrast seems to be a matter of degree. The dedicated contrast markers described for several sign languages include the use of contrastive spatial location, realized as contrastive localization of referents, sideward body leans, and dominance reversal (Wilbur & Patschke 1998; van der Kooij et al. 2006; Kimmelman 2019; Navarrete-González 2019). Crucially, these markers of contrast often spread over the whole contrasted clauses, and not only over the constituent in focus, as shown in (10), and they can thus be disassociated from focus.

       left body lean
(10)   CAT BITE [BOY]Focus
       right body lean
       IX DOG [BITE GIRL]Focus
       'The cat bites the boy while the dog bites the girl.'    (RSL, Kimmelman 2019: 179)

However, there are several complications with the interpretation of this phenomenon. First, it is unclear whether this type of marking can be applied when more than two alternatives are contrasted. Second, since this strategy is clearly modality-specific, it is questionable whether it can be used as an argument for the separateness of contrast in spoken languages, too. In addition, it has been shown that contrastive and non-contrastive focus can be marked by the same manual and non-manual markers, the difference being one of degree (Herrmann 2015; Kimmelman 2019). For instance, in RSL and NGT, both types of focus can be marked by ellipsis and by several manual prosodic markers, even though their frequency differs (Kimmelman 2019: 244). These findings are more in line with the theory that contrastive focus is a subtype of focus, and not a combination of two independent information-structural phenomena (but see Navarrete-González (2019), who argues that contrast in Catalan Sign Language (LSC) is marked by dedicated non-manual markers).

Legeland et al. (2018) further explored the realization of contrast in coordinated structures in NGT, based on corpus data. A basic assumption they make is that conjuncts in coordinate structures have a full-fledged left periphery (see Section 26.2.3), as has also been argued by Zorzi (2018) for LSC. Thus, in the LSC gapping example in (11), the contrastive subject topics occupy SpecTopP, while the contrastive foci are realized in SpecFocP.


(11)   [C.TOPIC MARINA] [C.FOCUS COFFEE] PAY [C.TOPIC JORDI] [C.FOCUS CROISSANT]
       'Marina paid for a coffee and Jordi for a croissant.'      (LSC, Zorzi 2018: 78)

Coordination in (11) is asyndetic, that is, no overt coordinator is used. Note that the two conjuncts are symmetric with respect to word order (the verb in the second conjunct being elided). In fact, symmetry (or parallelism) has been argued to be an important constraint on the structure of coordinations (e.g., Goodall 1987). Legeland et al. (2018: 60) therefore introduce a parallel structure constraint (PSC), which requires that "conjuncts must have a syntactically, semantically, and prosodically parallel structure". As for syntactic parallelism, this rules out English structures like *Mary goes to the market but to the mall Jack. However, the NGT corpus data reveal that asymmetric coordination – albeit the exception – is attested in this language. In the asyndetic coordination in (12), for instance, constituent order is VO in the first conjunct, but OV in the second (CI = cochlear implant, S-H = fingerspelled representation of slechthorend 'hard-of-hearing').

(12)   CI [GO++3a S-H SCHOOL] [HEARING SCHOOL GO3b]
       'Because of CI, (children) go to a hard-of-hearing school (or) go to a hearing school.'
                                                       (NGT, Legeland et al. 2018: 61)

Legeland et al. claim that in this case, the word order asymmetry results from focus movement in the second conjunct, as shown in Figure 26.2 (note that they follow Munn (1993), according to whom coordination has a binary structure, called BP, which is headed by a 'boolean' element B, the coordinating element). In this particular example, the two conjuncts share a topic (CI) which appears outside the coordination, and the head of the BP is empty.

Figure 26.2  Asymmetric coordination structure for the NGT example in (12), resulting from focus movement in the second conjunct (Legeland et al. 2018: 64)

Remember from the discussion in Section 26.2.2 that focus in NGT, including contrastive focus, can also be marked in situ (Crasborn & van der Kooij 2013). Legeland et al. (2018) assume that occasionally, in situ focus marking may not be considered strong enough for establishing the desired contrast across conjuncts, and that fronting of the focused constituent to SpecFocP yields a stronger marking, which may be perceived as being more compatible with the contrastive focus interpretation by some NGT signers. Apparently, in such contexts, the PSC may be violated. Further research is necessary in order to find out whether other sign languages allow for similar asymmetries in coordination.

26.3.3  Question-answer pairs

In Section 26.2.2, we mentioned that focus in some sign languages can be expressed by cleft-like constructions; an ASL example has been given in (4), and in (13) we provide another one. However, this construction type in ASL, as well as in other sign languages, has been analyzed in different ways, which is also relevant for its information-structural function.

                br
(13)   KIM SEE  STEAL TTY WHO  LEE
       'Kim saw that the one who stole the TTY was Lee.'      (ASL, Wilbur 1996: 218)

For ASL, three different syntactic accounts have been put forward, all of which argue against earlier accounts in terms of 'rhetorical questions' (Baker-Shenk 1983). Wilbur (1996) argued that structures like (13) are wh-clefts, basing her argument, among other things, on non-manual marking (which is different from wh-marking) and the fact that the wh-cleft can be embedded – as is true for (13). Hoza et al. (1997) claimed that this construction consists of two independent clauses: the question and the answer, showing, for instance, that (parenthetic) material can intervene between the question- and the answer-part. More recently, Caponigro & Davidson (2011) proposed an intermediary analysis: they analyze this construction as a single sentence containing an embedded question (Q-constituent) and an embedded answer (A-constituent). A crucial argument that they provide against the wh-cleft analysis is that the first part of this construction does not have to resemble a wh-question, as in (4) and (13), but can be a polar question instead. Furthermore, they show that the A-constituent, which often looks like a constituent smaller than a clause, can (optionally) actually be a full declarative clause. Both these properties are illustrated in (14).

       br
(14)   JOHN HAVE MOTORCYCLE, NO (HE NOT HAVE MOTORCYCLE)
       'John doesn't have a motorcycle.'        (ASL, Caponigro & Davidson 2011: 336)

As for the syntax of the construction, Wilbur (1996) assumed that the underlying structure of a cleft like (13) contains a small clause with the focused phrase (LEE) as subject and the wh-clause (STEAL TTY WHO) as predicate. The subject then moves to SpecIP and the wh-clause to SpecCP, yielding the order in (13). In contrast, Caponigro & Davidson (2011) propose that the entire question-answer clause (QAC) is a declarative clause (IP). This IP includes a silent copula (eBE), which takes the Q-constituent as its subject and the A-constituent, an IP with possibly elided material, as its complement. The structure in Figure 26.3 accounts for the embedded-like behavior of both the Q- and the A-constituent, as they are analyzed as the two arguments of the silent copula (note that the copula is always silent in ASL and other sign languages).


Figure 26.3  Structure of a question-​answer clause, according to Caponigro & Davidson (2011: 341)

For other sign languages, various analyses have been proposed as well. Branchini (2014) showed that a similar construction in LIS has properties different from ASL and is thus more directly amenable to the wh-cleft analysis (e.g., unlike ASL, relative clauses in LIS can have wh-signs as complementizers, which removes one of the arguments against analyzing the Q-constituent as a relative clause; furthermore, LIS has predicational wh-clefts as well), and Hauser (2018) provided similar arguments for French Sign Language. Based on naturalistic corpus data, Kimmelman & Vink (2017) recently demonstrated that the same phenomenon in NGT might actually represent a range of structures with different degrees of clause integration: from a discourse-level combination of sentences (à la Hoza et al.) to a single-sentence syntactic construction (à la Wilbur and Caponigro & Davidson), and that the variable properties of QACs are reflective of an ongoing process of grammaticalization.

The syntactic status of this construction type is crucial for determining its information-structural function. If one believes, following Hoza et al. (1997), that the construction represents a discourse-level combination of sentences, it clearly cannot express focus vs. background, as both sentences must have their own information structures (i.e., both must at least contain foci – similar to what we argued for coordination structures above). In Caponigro & Davidson's (2011) account, information structure does play a role, but they argue that it is not simply focus that is marked with this construction. Rather, they suggest that "[p]ragmatically, a QAC instantiates a topic/comment structure, with the Q-constituent expressing the topic as picking out a sub-question under discussion and the A-constituent expressing the comment as the answer to that sub-question" (Caponigro & Davidson 2011: 324). One can also follow Kimmelman & Vink (2017) in analyzing some of the (more integrated) instances of this phenomenon as marking (contrastive) focus and others as marking discourse-structural relations between sentences. Given the continuing debates around this phenomenon within one language (ASL), as well as the observed variation in its properties across sign languages, more research on this construction type is necessary.

26.3.4  Doubling

Doubling has been observed in many sign languages (see Kimmelman (2014: 33) for references), but is also attested in a variety of typologically diverse spoken languages (Kandybowicz 2007). However, just as with question-answer pairs, different researchers have proposed different syntactic analyses of doubling in sign languages.

Early research on doubling focused on verb doubling in ASL. Fischer & Janis (1990) argued that verbs in ASL are syntactically and morphologically restricted; in particular, they cannot at the same time be marked for aspect and license an object (15a). Consequently, whenever a transitive verb with an overt object requires aspectual modification, doubling is used to rescue the derivation: the initial copy of the verb is then responsible for licensing the object and the final copy for the realization of aspectual marking, as shown in (15b) (see also Kegl 1985; Liddell 2003).

(15)  a. * S-A-L-L-Y TYPE[asp: unrealized inceptive] T-E-R-M PAPER
           'As Sally is typing her term paper …'
      b.   SALLY THERE HMM TYPE T-E-R-M PAPER TYPE[asp: unrealized inceptive]
           'As Sally is typing her term paper …'
                                          (ASL, Fischer & Janis 1990: 281, 283)

Later research on doubling, in particular in ASL and Brazilian Sign Language (Libras), showed that not only verbs, but also modal verbs (16a), negators, quantifiers, nouns, and wh-words can be doubled; moreover, doubling of transitive verbs is attested even in the absence of aspectual marking (16b). Researchers suggested that in such cases, doubling has information structure-related functions, namely focus in general (ASL: Petronio 1993) or emphatic focus (ASL and Libras: Lillo-Martin & de Quadros 2008; Nunes & de Quadros 2008). Note that in both cases, focus is non-manually marked by a head nod on the clause-final instance of the doubled element.

                             hn
(16)  a.  IX1 CAN GO PARTY   CAN
          'I CAN go to the party.'
                            hn
      b.  IX1 LOSE BOOK     LOSE
          'I LOST the book.'
                      (Libras, adapted from Nunes & de Quadros 2008: 178, 180)

Kimmelman (2014) argued that in RSL and NGT, doubling is also used for information structure-related functions, but proposed that the functions of doubling are better described as foregrounding. This type of analysis allows unifying doubling as described in this section and topic-copying discussed in Section 26.2.1, example (2).5

Petronio (1993) originally suggested that the focused element is base-generated in the head of a head-final [+focus] CP. Among other things, this allows her to account for the fact that only heads, but not phrases, can be doubled (also, there can only be one double per clause). Nunes & de Quadros (2008) point out a number of conceptual and empirical problems with Petronio's account and offer an alternative analysis based on Nunes' (2004) copy theory of movement. A prerequisite for their account is the process of Chain Reduction, which, triggered by linearization considerations (i.e., Kayne's (1994) Linear Correspondence Axiom (LCA)), usually deletes all but one copy of a moved element. However, after morphological fusion, an adjoined element may become invisible to the LCA and thus to Chain Reduction. As for the example in (16b), the derivation would then proceed as follows (see Figure 26.4 for illustration): (i) the focused verb LOSE moves to and fuses with the head of an emphatic focus phrase (E-FocP), which will be overtly realized by head nod; (ii) the TP, including the lower copy of the verb, then moves to the specifier of a higher topic phrase; (iii) when it comes to spell-out, the final copy will be distinguishable from the other copy due to morphological fusion with the E-Foc head, and Chain Reduction does not apply, that is, both copies of the verb will be pronounced; the lower TP, however, will be deleted in accordance with the LCA.6

Figure 26.4  Emphatic focus doubling in the Libras example (16b): after adjunction to E-​Foc, both copies of the verb LOSE are pronounced (Nunes & de Quadros 2008: 185)

Bross (2018) briefly addresses focus doubling in DGS, and offers a slightly different account. Discussion of the example in (17) leads him to assume that the focus phrase, the head of which hosts the final double, is right-headed in DGS; the TP remains in situ, and SpecFocP can be occupied by a contrastive focus. If TP had moved to SpecTopP, as in Figure 26.4, then it should have preceded the contrastively focused pronoun.

(17)  A:  Did Paul buy the beer yesterday?
          contr                     foc
      B:  INDEX2 SHOULD BEER BUY    SHOULD
          'It was YOU who SHOULD have bought the beer!'

Note that all the researchers mentioned here analyze doubling as a grammatical (syntactic) process, and not as mere repetition. However, the grammatical status of similar constructions in spoken languages has been questioned (Stolz et al. 2011), and therefore, further research is needed.

26.3.5  Buoys and related strategies

Another strategy that is claimed to be related to information structure in sign languages is clearly modality-specific, namely buoys, or more generally, weak hand holds, whereby one hand is being held stationary in the location and configuration of a previously produced sign, while the other is used to produce one or more other signs. The relevant question here is what information is coded by means of this strategy.

Liddell (2003: 223), for instance, argued that buoys "help guide the discourse by serving as conceptual landmarks as discourse continues". He discussed several types of buoys: (i) the list buoy where a numeral handshape is held on the weak (non-dominant) hand, and different referents are associated with different fingers; (ii) the theme buoy where a raised index finger is held to represent an important referent; (iii) the pointer buoy where a pointing sign directed towards an important referent is held; and (iv) the fragment buoy where a part (i.e., a fragment) of a two-handed sign referring to an important concept is maintained in the signing space. The common function of the theme, pointer, and fragment buoys is thus that an important referent is emphasized. One can analyze this as a strategy for discourse topic marking: an element which is important not just for one sentence, but for an episode, is being highlighted. List buoys are usually used to identify several referents at once (e.g., family members), but their function is generally similar.

Another common type of weak hand holds occurs in locative constructions where the weak hand is representing the Ground in a Figure-Ground locative relation; in the RSL example in (18), this is the opened window. Although this hold is also used to emphasize the Ground and its location with respect to the Figure (the grandmother), it does not seem reasonable to analyze the Ground as a discourse topic, as it is often used sentence-internally and is not necessarily relevant for a longer episode (h1 = right hand, h2 = left hand).

(18)  h1:  WINDOW.OPEN   GRANNY IX
      h2:  WINDOW.OPEN--------------
      'The window opens. The granny is there.'          (RSL, Kimmelman 2019: 94)

Brentari & Crossley (2002) also described forward-referencing holds which are characterized by the fact that a final sign from one phrase is being held while the signer already starts producing the next phrase (19). In this way, the semantic relation between the two sentences is emphasized. Kimmelman et al. (2016) also discussed similar holds in RSL and NGT.

(19)  h1:  IX3 DISTRIBUTE-ALL-OVER.   IX1 DARN
      h2:      DISTRIBUTE-ALL-OVER--------
      'She distributed (the advertisement) all-over. (There I was,) "Darn!" '
                                     (ASL, adapted from Brentari & Crossley 2002: 122)

Kimmelman (2017) proposed a unified syntactic analysis for most types of weak hand holds, arguing that they all involve multi-dominance in syntax (as proposed for spoken languages by, for instance, de Vries (2009)), and that multi-dominant structures themselves trigger the hold. For instance, in the case of a list buoy, illustrated in (20), the sign THREE.LIST (articulated with a -hand) is a topic (Frame Setting) constituent shared by the three clauses, and as such, it has several parents (see Figure 26.5). This syntactic construction triggers the activation of the second hand, on which the multi-dominated constituent THREE.LIST is realized, that is, this constituent is only spelled out once. Note that the three pointing signs point to the thumb (IXa), index finger (IXb), and middle finger (IXc) of the list buoy.

(20)  h1:  IXa DAVIDENKO. IXb N-A-D-I-A. IXc R-I-T-A
      h2:  THREE.LIST------------------------------------------
      'Of the three of them, the first one was Davidenko, the second one Nadia, and the third one was Rita.'
                                                    (RSL, Kimmelman 2017: 30)

Figure 26.5  Multidominance analysis of a list buoy in example (20) (Kimmelman 2017: 44). FrSet stands for Frame Setting, a topical constituent in the left periphery

Note that not all weak hand holds have information structure-​related functions, or make any semantically or pragmatically relevant contributions at all. Some holds are used for phonetic reasons, and others might be used to mark syntactic and/​or prosodic domains; see Kimmelman (2019) for further details.

26.4  Experimental research

Although this chapter is devoted to theoretical approaches to information structure, in this section, we will briefly discuss experimental studies in this domain, which primarily concern the acquisition of topic and focus marking. Chen Pichler (2012), in a handbook chapter on the acquisition of sign languages, also discusses this topic, so the interested reader is advised to consult her chapter and the references mentioned there. In the experimental/acquisitional studies of sign languages, one can observe a tendency which we have already alluded to in previous sections, namely that it is not always clear what exactly is being studied: the pragmatic notions of topic and focus, or syntactic operations, such as topicalization, which can fulfill different functions. A related uncertainty concerns the role of non-manuals: it is not always clear whether they are analyzed as obligatory topic/focus markers. Finally, note that most if not all of the studies so far are based on very small groups of children or even on a single case.


Acquisition of topics in ASL has been studied by Reilly et al. (1991) and Chen Pichler (2001, 2010). Reilly et al. (1991) considered eyebrow raise to be the topic marker in ASL, and found that this marker did not emerge in their data until 3;0, while eyebrow raise was used to mark yes/no-questions already around 1;9. Chen Pichler (2001, 2010), however, noticed that Reilly et al. (1991) had identified examples with fronted objects, but had not analyzed those as topics due to the lack of non-manuals. Chen Pichler (2010) also reported examples of object fronting without eyebrow raise but with some prosodic marking in the data produced by one of the four children that she studied in the period between 20 and 30 months. For instance, in (21), produced by a 25-month-old child, the whole sentence is marked with eyebrow raise because it is a yes/no-question, but the topic GRANDMOTHER is prosodically separated from the comment by a hold. Chen Pichler (2010) hypothesized that the syntactic operation of topicalization might in fact be independent of eyebrow raise, and thus that topicalization for pragmatic purposes might actually emerge very early in ASL.7

       br
(21)   GRANDMOTHER, WANT
       'Is it grandmother you want?'          (ASL, Chen Pichler 2010: 171)

Note that from the translation of example (21), it is clear that the function of the fronted constituent GRANDMOTHER is contrastive focus, not topic. Chen Pichler (2010) also acknowledged this fact and compared her findings to those reported by Lillo-Martin & de Quadros (2005), to which we now turn.

Lillo-Martin & de Quadros (2005) studied the acquisition of focus marking in ASL and Libras. They analyzed longitudinal data from two ASL-acquiring and two Libras-acquiring children, for each child within different periods between 1;1 and 3;0. The purpose was to find out at what age the children would acquire fronting of new information (contrastive focus), and at what age they would acquire emphatic focus doubling (see Section 26.3.4) as well as the realization of sentence-final emphatic focus, which they analyze as resulting from the same syntactic operation, but different from fronting. It turned out that contrastive focus fronting was acquired significantly earlier by all children than emphatic focus doubling and focus-final realization, confirming Lillo-Martin & de Quadros' (2005) hypothesis that focus doubling and focus-final realizations are underlyingly the same operation, distinct from fronting. Note again that focus fronting in their data was not accompanied by eyebrow raise.

Wood (2013) investigated the perception of topic marking in Libras by native signers, late learners, and homesigners. Her main theoretical question was which parts of the grammar are innate ('rooted' in her terminology) and thus would also emerge in homesigners, who do not receive any sign language input. She hypothesized that topic marking would be a feature that is not completely innate and predicted that it would not emerge in homesigners. The participants were presented with sentences with and without topicalization, and their comprehension of these sentences was tested through picture matching. It turned out that late learners of Libras indeed showed high performance on sentences without topicalization and lower performance on sentences with topicalization, demonstrating that topicalization is a syntactic feature that is difficult for late learners. However, homesigners were at chance level for perception of both types of sentences, so the results with respect to topic marking are inconclusive specifically for this group.


Finally, in one experimental study, researchers looked at manual prosodic marking of focus in ASL (Gökgöz et al. 2016). In a production study, they found that children acquiring ASL in the age range of 4–8 years produced contrastively focused signs with longer duration, higher articulation speed, more repetitions, and proximalization in comparison to the same signs in non-contrastive focus positions. Note, however, that this study did not address the issue of development of such marking, and it is therefore possible that even at the age of 4, children already show adult-like performance.

Taken together, our brief survey reveals that although there are some experimental studies of topicalization, doubling, and prosodic marking of focus, they are mostly small-scale, and sometimes inconclusive. It is definitely necessary to conduct further research in this domain to come up with theoretically important conclusions.

26.5  Conclusions

In summary, the encoding of information structure in sign language has been investigated from various perspectives and for several sign languages, and based on the available studies, some preliminary conclusions can be drawn. First, sign languages, similar to spoken languages, can mark topics and foci; however, as is also the case in spoken languages, topics and foci are rarely marked unambiguously and by specialized markers. Rather, information-structural notions are expressed through a complex interplay of syntactic and prosodic markers (including manual and non-manual prosody).

Concerning recent directions in research on information structure in sign languages, several things can be noted. First, sign languages can contribute to research on the relation between prominence and focus, as they follow the general tendency identified for spoken languages – yet, they also present us with details of focus expression that do not fit the expected pattern. Second, sign languages can make an important contribution to the debate on the status of contrast in relation to focus, as at least some types of contrast have dedicated markers in sign languages that have been studied in this respect. Furthermore, some constructions related to information structure that are common across sign languages are of interest, namely the so-called wh-clefts and doubling. Both these construction types also exist in spoken languages, but their syntactic properties as well as their pragmatic functions in sign languages appear to be different.

While the bulk of this chapter is devoted to theoretical research, we also briefly discussed the few available experimental or psycholinguistic studies on information structure in sign languages. So far, most such studies have focused on the acquisition of topic and focus marking. Clearly, much more research is needed in this domain.

Given the current state of research, several domains can be identified in which more research on information structure in sign languages is expected to lead to new and exciting results. On the one hand, more research on the relations between focus and prominence and between focus and contrast (as discussed above) is necessary. On the other hand, doubling and wh-cleft-like constructions should be studied for more sign languages, and their syntactic and pragmatic functions should be identified with more precision. Given that information structure is most visible when sentences are analyzed as a part of discourse, and given the recent increase in the development of sign language corpora, much more research in this field should be done using naturalistic corpus data. In addition, there are some promising developments in automatic annotation tools (Karppa et al. 2012; Puupponen et al. 2015) that can be applied to manual and non-manual prosody, which opens up the possibility of conducting reliable large-scale studies on information structure.

Finally, it is clear that more sign languages must be investigated, also on questions related to information structure. To date, no real typological studies in this domain exist. In addition, most existing descriptions focus on Western urban sign languages, while rural sign languages (de Vos & Pfau 2015) have not been investigated at all. We thus have to keep in mind that the limits of variation that emerge from already existing research might not be representative of the true diversity present in sign languages.

Notes

1 Glossing conventions: we follow common conventions for glossing sign language examples. Abbreviations: / – prosodic break, IX – index (pointing sign); non-manual markers: br – brow raise, hf – head forward, hn – head nod, top – topic, foc – focus, wh – wh-question, y/n – yes/no-question.
2 Wilbur (2012) also discusses other cleft-like constructions in ASL. However, unlike the so-called wh-clefts that have been described for many sign languages, the other constructions appear to be less common.
3 Puglielli & Frascarelli (2007) suggest that the topic types distinguished by Aarons map onto the three types distinguished by Frascarelli & Hinterhölzl (2007) for spoken languages, i.e., aboutness-shift, familiar, and contrastive topics, which are arranged in the left periphery in the hierarchical order: ShiftP > ContrP > FocP > FamP* > IP ('*' indicates that FamP is recursive).
4 An issue that we leave aside in our discussion of the left periphery is the role of focus in wh-questions. It has been argued, at least for ASL and Indian Sign Language (IndSL), that these languages distinguish focused from non-focused wh-questions. Neidle (2002) claims that clause-final wh-phrases move through SpecFocP in ASL. Aboh & Pfau (2010) suggest that IndSL does not employ proper wh-signs but only a general wh-particle G-WH occupying the head of the interrogative phrase, which may combine with an associate phrase that moves to SpecFocP (e.g., PLACE G-WH 'where'); see Kelepir (Chapter 11) for some discussion.
5 It is worth noting that Crasborn et al. (2012) argued that topic-copying in NGT is at least partially prosodically motivated in that it is used to create heavy elements in terms of syllable structure in the phrase-final position. Other instances of doubling might also be partially motivated by similar reasons.
6 Nunes (2004), citing Koopman (1984), provides verb doubling examples from Vata, a Niger-Congo language spoken in Ivory Coast, that can be accounted for along similar lines, i.e., by assuming the phonetic realization of multiple copies after morphological reanalysis.
7 Note that Chen Pichler (2010) mentioned ISL, where topics were not consistently marked by eyebrow raise, but were instead marked by a sudden change of several non-manual markers at the boundary between the topic and the rest of the sentence (Nespor & Sandler 1999); this sudden change can also be seen in her ASL data. Hence, two interpretations are possible: either eyebrow raise on topics is not obligatory in ASL either, contrary to what has been claimed in some previous research, or children acquiring topic marking first acquire the syntactic operation and the change in non-manuals, and only later the eyebrow raise as a non-manual specifically associated with topics.

References

Aarons, Debra. 1996. Topics and topicalization in American Sign Language. Stellenbosch Papers in Linguistics 30. 65–106.
Aboh, Enoch O. 2004. The morphosyntax of complement-head sequences. Clause structure and word order patterns in Kwa. Oxford: Oxford University Press.
Aboh, Enoch O. & Roland Pfau. 2010. What's a wh-word got to do with it? In Paola Benincà & Nicola Munaro (eds.), Mapping the left periphery: The cartography of syntactic structures, Vol. 5, 91–124. Oxford: Oxford University Press.

Baker-Shenk, Charlotte L. 1983. A microanalysis of the nonmanual components of questions in American Sign Language. Berkeley, CA: University of California PhD dissertation.
Bos, Heleen. 1995. Pronoun copy in sign language of the Netherlands. In Heleen Bos & Trude Schermer (eds.), Sign language research 1994: Proceedings of the Fourth European Congress on Sign Language Research, 121–147. Hamburg: Signum.
Branchini, Chiara. 2014. On relativization and clefting. An analysis of Italian Sign Language. Berlin: De Gruyter Mouton.
Brentari, Diane & Laurinda Crossley. 2002. Prosody on the hands and face. Evidence from American Sign Language. Sign Language & Linguistics 5(2). 105–130.
Bross, Fabian. 2018. The clausal syntax of German Sign Language: A cartographic approach. Stuttgart: University of Stuttgart PhD dissertation.
Calderone, Chiara. 2020. Can you retrieve it? Pragmatic, morpho-syntactic and prosodic features in sentence topic types in Italian Sign Language (LIS). Venice: Università Ca'Foscari PhD dissertation.
Caponigro, Ivano & Kathryn Davidson. 2011. Ask, and tell as well: Clausal question-answer pairs in ASL. Natural Language Semantics 19(4). 323–371.
Chen Pichler, Deborah. 2001. Word order variation and acquisition in American Sign Language. Storrs, CT: University of Connecticut PhD dissertation.
Chen Pichler, Deborah. 2010. Using early ASL word order to shed light on word order variability in sign language. In Merete Anderssen, Kristine Bentzen, & Marit Westergaard (eds.), Variation in the input: Studies in the acquisition of word order, 157–177. Dordrecht: Springer.
Chen Pichler, Deborah. 2012. Language acquisition. In Roland Pfau, Markus Steinbach, & Bencie Woll (eds.), Sign language. An international handbook, 647–686. Berlin: De Gruyter Mouton.
Chomsky, Noam. 1981. Lectures on government and binding. The Pisa lectures. Dordrecht: Foris.
Chomsky, Noam. 1995. The minimalist program. Cambridge, MA: MIT Press.
Coulter, Geoffrey R. 1990. Emphatic stress in ASL. In Susan D. Fischer & Patricia Siple (eds.), Theoretical issues in sign language research: Vol. 1. Linguistics, 109–126. Chicago: University of Chicago Press.
Crasborn, Onno & Els van der Kooij. 2013. The phonology of focus in Sign Language of the Netherlands. Journal of Linguistics 49(3). 515–565.
Crasborn, Onno, Els van der Kooij, & Johan Ros. 2012. On the weight of phrase-final prosodic words in a sign language. Sign Language & Linguistics 15(1). 11–38.
Crasborn, Onno, Els van der Kooij, Johan Ros, & Helen de Hoop. 2009. Topic agreement in NGT (Sign Language of the Netherlands). The Linguistic Review 26. 355–370.
Davidson, Kathryn & Helen Koulidobrova. 2015. Sentence-final doubling, negation, and the syntax/discourse interface. Presentation at Formal and Experimental Advances in Sign Language Research (FEAST 4), Barcelona, May 4–6.
de Vos, Connie & Roland Pfau. 2015. Sign language typology: The contribution of rural sign languages. Annual Review of Linguistics 1(1). 265–288.
de Vries, Mark. 2009. On multidominance and linearization. Biolinguistics 3(4). 344–403.
Fischer, Susan & Wynne Janis. 1990. Verb sandwiches in American Sign Language. In Siegmund Prillwitz & Tomas Vollhaber (eds.), Current trends in European sign language research: Proceedings of the 3rd European Congress on Sign Language Research, 279–294. Hamburg: Signum.
Frascarelli, Mara & Roland Hinterhölzl. 2007. Types of topics in German and Italian. In Susanne Winkler & Kerstin Schwabe (eds.), On information structure, meaning, and form, 87–116. Amsterdam: John Benjamins.
Friedman, Lynn A. 1976. The manifestation of subject, object, and topic in the American Sign Language. In Charles N. Li (ed.), Subject and topic, 125–148. New York: Academic Press.
Gökgöz, Kadir, Ksenia Bogomolets, Lyn Tieu, Jeffrey L. Palmer & Diane Lillo-Martin. 2016. Contrastive focus in children acquiring English and ASL: Cues of prominence. Proceedings of the 6th Conference on Generative Approaches to Language Acquisition North America (Galana 2015), 13–23 (retrieved from www.lingref.com/cpp/galana/6/index.html).
Goodall, Grant. 1987. Parallel structures in syntax: Coordination, causatives, and restructuring. Cambridge: Cambridge University Press.
Gussenhoven, Carlos. 2004. The phonology of tone and intonation. Cambridge: Cambridge University Press.

Haegeman, Liliane. 2003. Conditional clauses: External and internal syntax. Mind & Language 18. 317–339.
Hauser, Charlotte. 2018. Question-answer pairs: The help of LSF. Formal and Experimental Advances in Sign Language Theory 2. 44–55.
Herrmann, Annika. 2015. The marking of information structure in German Sign Language. Lingua 165. 277–297.
Hoza, Jack, Carol Neidle, Dawn MacLaughlin, Judy Kegl, & Benjamin Bahan. 1997. A unified syntactic account of rhetorical questions in American Sign Language. In Carol Neidle, Dawn MacLaughlin, & Robert G. Lee (eds.), Syntactic structure and discourse function: An examination of two constructions in American Sign Language, 1–23. Boston: ASLLRP Publications.
Ingram, Robert M. 1978. Theme, rheme, topic, and comment in the syntax of American Sign Language. Sign Language Studies 20(1). 193–218.
Janzen, Terry. 2007. The expression of grammatical categories in signed languages. In Elena Pizzuto, Paola Pietrandrea, & Raffaele Simone (eds.), Verbal and signed languages. Comparing structures, constructs and methodologies, 171–197. Berlin: Mouton de Gruyter.
Kandybowicz, Jason. 2007. On fusion and multiple copy spell-out. The case of verbal repetition. In Norbert Corver & Jairo Nunes (eds.), The copy theory of movement, 119–150. Amsterdam: John Benjamins.
Karppa, Matti, Tommi Jantunen, Ville Viitaniemi, Jorma Laaksonen, Birgitta Burger, & Danny De Weerdt. 2012. Comparing computer vision analysis of signed language video with motion capture recordings. In Proceedings of the 8th Language Resources and Evaluation Conference (LREC 2012). Retrieved from www.lrec-conf.org/proceedings/lrec2012/pdf/321_Paper.pdf
Kayne, Richard S. 1994. The antisymmetry of syntax. Cambridge, MA: MIT Press.
Kegl, Judy. 1985. Locative relations in American Sign Language: Word formation, syntax, and discourse. Boston: MIT PhD dissertation.
Kimmelman, Vadim. 2014. Information structure in Russian Sign Language and Sign Language of the Netherlands. Amsterdam: University of Amsterdam PhD dissertation.
Kimmelman, Vadim. 2015. Topics and topic prominence in two sign languages. Journal of Pragmatics 87. 156–170.
Kimmelman, Vadim. 2017. Linearization of weak hand holds in Russian Sign Language. Linguistics in Amsterdam 10(1). 28–59.
Kimmelman, Vadim. 2019. Information structure in sign languages: Evidence from Russian Sign Language and Sign Language of the Netherlands. Berlin: De Gruyter Mouton.
Kimmelman, Vadim & Roland Pfau. 2016. Information structure in sign languages. In Caroline Féry & Shinichiro Ishihara (eds.), The Oxford handbook of Information Structure, 814–834. Oxford: Oxford University Press.
Kimmelman, Vadim, Anna Sáfár, & Onno Crasborn. 2016. Towards a classification of weak hand holds. Open Linguistics 2. 211–234.
Kimmelman, Vadim & Lianne Vink. 2017. Question-answer pairs in Sign Language of the Netherlands. Sign Language Studies 17(4). 417–449.
Klomp, Ulrika. 2019. Conditional clauses in Sign Language of the Netherlands: A corpus-based study. Sign Language Studies 19(3). 309–347.
Krifka, Manfred. 2008. Basic notions of information structure. Acta Linguistica Hungarica 55(3–4). 243–276.
Legeland, Iris, Katharina Hartmann, & Roland Pfau. 2018. Word order asymmetries in NGT coordination: The impact of Information Structure. Formal and Experimental Advances in Sign Language Theory 2. 56–67.
Li, Charles N. & Sandra Thompson. 1976. Subject and topic: A new typology of language. In Charles N. Li & Sandra Thompson (eds.), Subject and topic, 456–489. New York: Academic Press.
Liddell, Scott K. 2003. Grammar, gesture and meaning in American Sign Language. Cambridge: Cambridge University Press.
Lillo-Martin, Diane & Ronice M. de Quadros. 2005. The acquisition of focus constructions in American Sign Language and Lingua de Sinais Brasileira. In Alejna Brugos, Manuella Clark-Cotton, & Seungwan Ha (eds.), Proceedings of BUCLD 29, 365–375. Somerville, MA: Cascadilla Press.

Lillo-Martin, Diane & Ronice M. de Quadros. 2008. Focus constructions in American Sign Language and Lingua de Sinais Brasileira. In Josep Quer (ed.), Signs of the time. Selected papers from TISLR 8, 161–176. Hamburg: Signum.
Maki, Hideki, Lizanne Kaiser, & Massao Ochi. 1999. Embedded topicalization in English and Japanese. Lingua 109. 1–14.
McIntire, Marina. 1982. Constituent order and location in American Sign Language. Sign Language Studies 37(1). 345–386.
Munn, Alan B. 1993. Topics in the syntax and semantics of coordinate structures. College Park, MD: University of Maryland PhD dissertation.
Navarrete-González, Alexandra. 2019. The notion of focus and its relation to contrast in Catalan Sign Language (LSC). Sensos-e VI(1). 18–40.
Neidle, Carol. 2002. Language across modalities: ASL focus and question constructions. Linguistic Variation Yearbook 2. 71–98.
Neidle, Carol, Judy Kegl, Dawn MacLaughlin, Benjamin Bahan, & Robert G. Lee. 2000. The syntax of American Sign Language. Functional categories and hierarchical structure. Cambridge, MA: MIT Press.
Nespor, Marina & Wendy Sandler. 1999. Prosody in Israeli Sign Language. Language and Speech 42(2). 143–176.
Nunes, Jairo. 2004. Linearization of chains and sideward movement. Cambridge, MA: MIT Press.
Nunes, Jairo & Ronice M. de Quadros. 2008. Phonetically realized traces in American Sign Language and Brazilian Sign Language. In Josep Quer (ed.), Signs of the time. Selected papers from TISLR 8, 177–190. Hamburg: Signum.
Petronio, Karen. 1991. A focus position in ASL. In Jonathan D. Bobaljik & Tony Bures (eds.), Papers from the Third Student Conference in Linguistics 1991 (MIT Working Papers in Linguistics 14), 211–225. Cambridge, MA: MITWPL.
Petronio, Karen. 1993. Clause structure in American Sign Language. Seattle: University of Washington PhD dissertation.
Pfau, Roland. 2016. Non-manuals and tones: A comparative perspective on suprasegmentals and spreading. Revista de Estudos Linguísticos da Universidade do Porto 11. 19–58.
Pfau, Roland & Markus Steinbach. 2006. Modality-independent and modality-specific aspects of grammaticalization in sign languages (Linguistics in Potsdam 24). Potsdam: Universitäts-Verlag (available at: http://opus.kobv.de/ubp/volltexte/2006/1088/).
Puglielli, Annarita & Mara Frascarelli. 2007. Interfaces: The relation between structure and output. In Elena Pizzuto, Paola Pietrandrea, & Raffaele Simone (eds.), Verbal and signed languages. Comparing structures, constructs, and methodologies, 133–167. Berlin: Mouton de Gruyter.
Puupponen, Anna, Tuija Wainio, Birgitta Burger, & Tommi Jantunen. 2015. Head movements in Finnish Sign Language on the basis of motion capture data: A study of the form and function of nods, nodding, head thrusts, and head pulls. Sign Language & Linguistics 18(1). 41–89.
Reilly, Judy, Marina McIntire, & Ursula Bellugi. 1991. Baby face: A new perspective on universals in language acquisition. In Patricia Siple & Susan Fischer (eds.), Theoretical issues in sign language research, Volume 2: Psychology, 9–23. Chicago: University of Chicago Press.
Repp, Sophie. 2016. Contrast: dissecting an elusive information-structural notion and its role in grammar. In Caroline Féry & Shinichiro Ishihara (eds.), The Oxford handbook of information structure, 270–289. Oxford: Oxford University Press.
Rizzi, Luigi. 1997. The fine structure of the left periphery. In Liliane Haegeman (ed.), Elements of grammar. Handbook in generative syntax, 281–337. Dordrecht: Kluwer.
Rizzi, Luigi. 2001. On the position "Int(errogative)" in the left periphery of the clause. In Guglielmo Cinque & Giampaolo Salvi (eds.), Current studies in Italian syntax: Essays offered to Lorenzo Renzi, 287–296. New York: Elsevier.
Sandler, Wendy. 2011. Prosody and syntax in sign languages. Transactions of the Philological Society 108(3). 298–328.
Schlenker, Philippe, Valentina Aristodemo, Ludovic Ducasse, Jonathan Lamberton, & Mirko Santoro. 2016. The unity of focus: Evidence from sign language (ASL and LSF). Linguistic Inquiry 47(2). 363–381.
Stolz, Thomas, Cornelia Stroh, & Aina Urdze. 2011. Total reduplication. The areal linguistics of a potential universal. Berlin: Akademie-Verlag.

Sze, Felix Y.B. 2008. Topic constructions in Hong Kong Sign Language. Bristol: University of Bristol PhD dissertation.
Sze, Felix Y.B. 2011. Nonmanual markings for topic constructions in Hong Kong Sign Language. Sign Language & Linguistics 14(1). 115–147.
van der Kooij, Els, Onno Crasborn, & Wim Emmerik. 2006. Explaining prosodic body leans in Sign Language of the Netherlands: pragmatics required. Journal of Pragmatics 38(10). 1598–1614.
Wilbur, Ronnie B. 1996. Evidence for the function and structure of wh-clefts in American Sign Language. In William H. Edmondson & Ronnie B. Wilbur (eds.), International review of sign linguistics, 209–256. Mahwah, NJ: Lawrence Erlbaum.
Wilbur, Ronnie B. 1997. A prosodic/pragmatic explanation for word order variation in ASL with typological implications. In Marjolijn Verspoor, Kee Dong Lee, & Eve Sweetser (eds.), Lexical and syntactic constructions and the constructions of meaning, 89–104. Amsterdam: John Benjamins.
Wilbur, Ronnie B. 1999. Stress in ASL: Empirical evidence and linguistic issues. Language and Speech 42(2–3). 229–250.
Wilbur, Ronnie B. 2012. Information structure. In Roland Pfau, Markus Steinbach, & Bencie Woll (eds.), Sign language. An international handbook, 462–489. Berlin: De Gruyter Mouton.
Wilbur, Ronnie B. & Cynthia G. Patschke. 1998. Body leans and the marking of contrast in American Sign Language. Journal of Pragmatics 30(3). 275–303.
Wilbur, Ronnie B. & Cynthia Patschke. 1999. Syntactic correlates of brow raise in ASL. Sign Language & Linguistics 2(1). 3–41.
Wood, Sandra K. 2013. Degrees of rootedness in acquisition of language: A look at Universal Grammar in homesigners and late learners of Libras. Storrs, CT: University of Connecticut PhD dissertation.
Zimmermann, Malte & Edgar Onea. 2011. Focus marking and focus interpretation. Lingua 121(11). 1651–1670.
Zorzi, Giorgia. 2018. Gapping vs. VP-ellipsis in Catalan Sign Language. Formal and Experimental Advances in Sign Language Theory 1. 70–81.

27
BIMODAL BILINGUAL GRAMMARS
Theoretical and experimental perspectives

Caterina Donati

27.1  Introduction

Bilingualism, referring to competence in two (or more) languages, is a widespread phenomenon, almost universal across language communities all over the world. Bilingual individuals and their linguistic competence have long attracted the attention of researchers in linguistics and in psycholinguistics. Many fascinating issues can be raised in relation to these individuals, by no means exceptional: how can the same mind host two (or more) grammars that can be very remote from one another? How do these grammars interact, and how is language production able to spring out of two distinct systems, only episodically triggering mixing phenomena? When a bilingual individual speaks one language, what happens to her other language? Is it simply switched off and inactive, or is it rather also active and simply superficially suppressed in order to adapt to our articulatory conditions that only allow one string of sounds to be produced at a time?

These and other questions can also be raised, and with even more fascinating implications, in relation to bimodal bilinguals, that is, bilinguals who are competent in a sign language and in a spoken language. While unimodal bilingualism entails a severe production constraint because of the physical impossibility of producing two spoken words or phrases at the same time, bimodal bilinguals have two output channels available: the vocal tract and the hands. In addition, while for unimodal bilinguals, both languages are perceived by the same sensory system (audition), for bimodal bilinguals one language is perceived auditorily and the other is perceived visually. This might mean that bimodal bilinguals are likely to display production and maybe perception phenomena that will be different from those observed in unimodal bilingualism, and will possibly be directly relevant for understanding the deeper architecture of the bilingual mind.

This chapter presents a survey of what we know about bimodal bilinguals, about their linguistic competence and its evolution (Section 27.3), and about their monolingual production and their mixed production (Section 27.4), both from a more experimental and psycholinguistic point of view and from a more theoretically oriented perspective. But first of all, some definitions are needed (Section 27.2).


27.2 Definitions: bilinguals and bimodal bilinguals

Definitions of bilinguals can vary widely, according to how restrictively their boundaries are set (see Grosjean (1998) for a review). At one extreme, we can label as bilingual any individual who possesses some competence in more than one language. This will include near natives and natives, second language learners, third language learners, and maybe even lazy high school students of a foreign language. At the other extreme, we can restrict our definition to individuals who have a native or near-native competence in more than one language (de Houwer 1995; Müller & Hulk 2001; Müller 2003). In this chapter, we will incline towards this latter definition, which has the advantage of targeting a more homogeneous population, whose properties are likely to be more easily investigated as a unitary phenomenon. While bilinguals can become so under many circumstances, the context most likely to give rise to this kind of balanced or almost-balanced competence is typically that of simultaneous or very early exposure to both languages (Romaine 1995). This happens typically in mixed families, where each parent speaks their own language to their child, or in migration contexts, where the family keeps their heritage language in addition to the dominant language of the host country. When it comes to bimodal bilinguals, the restrictive definition only applies to one very specific population, that of hearing children of Deaf parents, whose situation of language exposure is very close to that of migration contexts (Polinsky 2018): hearing children of Deaf parents receive spontaneous and early exposure to the heritage sign language within the family, and learn the dominant spoken language almost as early from the larger hearing community, from family relatives, at school, and so on. Besides growing up in external circumstances similar to those under which early speech bilinguals are raised, these individuals – commonly designated as Codas (from Children of Deaf Adults) or, in the case of children, Kodas (Kids of Deaf Adults) – are known to acquire their sign language and their spoken language along steps that are in many ways parallel to those observed for speech-unimodal bilinguals (Newport & Meier 1985; Petitto et al. 2001). Some adult Codas may then restrict the use of their sign language to family-related activities, while many choose to employ their sign language as professional interpreters or in other capacities (Preston 1995).1 It is nevertheless true that Deaf signers are also generally competent in both the sign language and the spoken language of the community. However, many Deaf people prefer to read and write the spoken language rather than to speak it.2 Moreover, Deaf individuals do not acquire the spoken language in the same way that speech-unimodal bilinguals acquire their second language: typically, early speech bilinguals are naturally and spontaneously exposed to their two languages. In contrast, spoken language is not accessible in the environment of a deaf person, and deaf children require special intervention, including long training in speech articulation, speech perception, and lip reading (see Blamey & Sarant 2003). For this reason, we will only include hearing individuals in our definition of bimodal bilinguals.
Another population of children with very early exposure to both a sign language and a spoken language has very recently arisen: Deaf children who sign and speak with the aid of a cochlear implant (CI). CIs are medical devices that are surgically implanted inside the ear in order to allow the user to perceive sounds in the range used for speech. CIs are recommended by medical professionals in many countries as the privileged and earliest medical intervention on deaf children as young as 12 months of age. It is standard practice in most countries not to expose children who receive CIs to sign language, in line
with the oral-​only approach that is still the mainstream stance of most medical and educational professionals (see Mellon et al. (2015) for a debate). Some children are exposed to some amount of signing through ‘Total Communication’ educational programs, but the nature of the signing used in such programs is quite variable and poorly controlled. In some cases, implanted children are indeed exposed to a natural sign language quite early  or after initial failure of an oral-​only approach, but very little is known about their  language  development and the amount and quality of the sign language input they are concretely exposed to (see Bouchard et al. (2009) and Peterson et al. (2010) for reviews). In a few cases, CI children have Deaf signing parents, and have thus access to sign language from birth as well as early spoken language exposure after implantation. A small group of American children in these conditions (five individuals) has been studied by Davidson et  al. (2014) focusing on their performance in English. Interestingly, they found that CI children performed as well as the control hearing group (25 Kodas) on a variety of standardized tests. These results have been confirmed since then by other studies showing that children with CI benefit from sign language exposure (Mouvet et al. 2013; Rinaldi & Caselli 2014). This means that signing CI children should definitely be included in our restrictive definition of bimodal bilinguals, and their dual linguistic competence investigated in this framework. For the time being, we shall focus in this chapter on Codas only.

27.3 Development3

Historically, the first attitude of education professionals and scholars towards hearing children whose parents are Deaf signers was one of concern: in the early 1970s, many papers focused on the possibility that children's competence would be affected by the atypical speech input provided by Deaf parents (see Bull (1998) for an extensive bibliography on Kodas), with some finding delays (e.g., Schiff & Ventry 1976) while others did not (e.g., Schiff 1979). An important and very early study on these issues is Mayberry (1976), who studied the English competence of eight Kodas aged 3–7 years and found that none exhibited a language deficit on the basis of several measures. This study can be considered the first to identify the situation of these children as that of bilinguals, and to place their observation within the framework of a general interest in bilingualism. It was an exception in those decades, when most scholars, even when emphasizing the role of the 'sign system' in influencing children's development, would not compare it to that of a native language. Things have of course changed radically since Klima & Bellugi (1979) and the decades of sign language research concluding definitively that sign languages are fully-fledged natural languages.

27.3.1 Separation

An important field in the study and understanding of bilingualism concerns its development: scholars have long wondered how children differentiate the input they are exposed to, and whether this differentiation is a late and painful achievement that children manage after a phase of confusion, or rather an easy and early task that children perform at the onset of their acquisition. The traditional and still commonly present suspicion towards bilingualism rests on the belief that our cognition is really designed
for monolingualism, and that bilingualism is thus an exceptional and potentially confusing condition for children. On the scientific side, the main proponents of this view, and notably Volterra & Taeschner (1978) and Taeschner (1983), argue for an initial phase in acquisition where children build a fusional system, trying to accommodate the input data to this unitary hypothesis. Bilingual acquisition would thus be described as the process of separating this initial unique system. Volterra & Taeschner (1978) based their conclusion on basically two phenomena: (i) mixed productions, namely utterances where children appeared to mix the two languages, which were interpreted as a symptom of the fusion and (ii) lexical gaps in children’s initial vocabulary, where certain items would only appear in one language and others in the other language, pointing to the development of a single merged lexicon. This view has long been challenged by subsequent observations (Genesee 1989; de Houwer 1990). Most scholars believe today that children keep the two systems separate from the very onset of their language acquisition. This is clearly shown in particular by the fact that they systematically develop translational equivalents (namely, pairs of lexical items which are synonymous in the two languages) very early on. As for mixed utterances, their interpretation has also radically changed: most researchers argue now that mixed utterances do not reflect fusion or confusion, but demonstrate instead the bilingual child’s distinct representations of their two input languages from an early age. We shall return to mixed utterances in Section 27.4. Interestingly, one of the milestones showing early separation in linguistic systems crucially involved a population of bimodal bilinguals: Petitto et al. (2001) studied the language development over one year of three children acquiring Quebec Sign Language (LSQ) and French and three children acquiring French and English, with ages at onset approximately 1;02, 2;6, and 3;6 per group. They found no significant difference between the two groups, who reached classic early linguistic milestones in each of their languages at about the same time (and similarly to monolinguals), exhibited a lexical rate and growth in each of the languages that was equivalent (and commensurate to what is reported for monolinguals over the same time period), and crucially produced a quite sizable and comparable proportion of translation equivalents (about 40–​50% of the vocabulary entries had a translation equivalent) even at the youngest age (1;02–​1;05). In terms of mixing, the children in the study also mixed their languages to varying degrees, and some persisted in using a language that was not the primary language of the addressee, but the propensity to do both was directly related to the parents’ mixing rates, in combination with their own developing language preference. Mixed utterances, finally, were different in Kodas, since they would tend to speak and sign simultaneously, while unimodal bilinguals would of course alternate the two languages. We shall return to this important observation in the section devoted to mixing (Section 27.4.2). The definitive conclusion is that the capacity of differentiating between the two languages being developed is very precocious and might result from some biological mechanism that is crucially modality-​independent.

27.3.2 Cross-linguistic influence: the BiBiBi project

Early separation of a bilingual's languages does not mean total independence of the two systems. Bilinguals do exhibit phenomena of cross-linguistic influences, with their language A influenced by their other language B. The first explanations of these phenomena
relied exclusively on the notion of language dominance, by which the stronger language would influence the weaker one. While it is certainly the case that everyday exposure to a language that is strongly dominant in a community, for example in migration contexts, might influence the heritage or home language of a child, in many cases of balanced bilinguals the notion of dominance runs the risk of being circular, since the dominant language is identified on the basis of the directionality of cross-linguistic influence, which in turn is explained on the basis of language dominance (Hulk & Müller 2000). Hulk & Müller (2000) developed an articulated and highly influential proposal on the structural factors constraining such cross-linguistic influences. They claim that cross-linguistic influence occurs (i) at the interface; (ii) only if language A has a syntactic construction which may seem to allow more than one syntactic analysis and, at the same time, language B contains evidence for one of these possible analyses. These conditions are posited as necessary but not sufficient for cross-linguistic influence between any two languages acquired. For example, Paradis & Navarro (2003) and Serratrice et al. (2004) have shown that English-Spanish and English-Italian bilinguals, respectively, tend to overuse and over-accept subject pronouns in their null subject language, with respect to monolinguals. They produce and accept in particular overt subject pronouns in a topic position that are pragmatically odd in the target languages, but not completely ungrammatical. This phenomenon thus complies both with condition (i) and condition (ii) of the model above. The formulation given above, for which evidence has been gathered over the years for a number of phenomena and a number of language pairs, has been challenged more recently (see Serratrice (2013) for a critical overview). It has been shown in particular that processing factors might play a role as well, leading to non-target output in the bilinguals' usage of the language, and not in their knowledge of it. The idea is that bilinguals are faced with an extra processing load, which may lead them to fail at or to avoid challenging tasks, particularly in relation to the interfaces. As for the overuse of overt subjects just cited, it can be explained along these lines if the null category is more costly to process than the overt one. The necessity of an explanation of cross-linguistic influence that goes beyond pure structural factors is made particularly clear by cases where the observed effect goes in the opposite direction with respect to what is predicted by Hulk & Müller's hypothesis. An example is the distribution of bare nominals in English-Italian bilinguals (Serratrice et al. 2009): while English has both bare subject NPs and NPs with a definite article, Italian does not allow for bare subject NPs. The expectation would be that the overlapping structure (the NP with a definite article) would be extended to semantically inappropriate contexts in English (yielding utterances such as *Usually the tomatoes are red). However, this is not what is observed: while the distribution of bare NPs in English was equivalent to that of age-paired monolinguals, the children in the study over-accepted ungrammatical bare nouns in generic contexts in Italian (*Pomodori sono rossi 'Tomatoes are red'). The authors argued that the directionality of the cross-linguistic influence is here motivated by economy considerations.
Finally, Nicoladis (2006) and Nicoladis et al. (2010) put cross-linguistic influences in direct relation with mixing: both would be due to incomplete or imperfect inhibition of the alternative language in production and in processing. We shall return to this issue in the section on blending (Section 27.4.2).

What about bimodal bilinguals? Some years ago, Lillo-Martin and colleagues (Lillo-Martin et al. 2010, 2014) launched a large project, called the Binational Bimodal Bilingual (BiBiBi) language acquisition project, to investigate these and other issues concerning the development of bimodal bilinguals (bibibi.uconn.edu). This is a collaborative project aiming at studying bimodal bilingual children in both the United States (acquiring American Sign Language (ASL) and English) and Brazil (acquiring Brazilian Sign Language (Libras) and Brazilian Portuguese). The primary participants are Koda children, but some comparable data are also collected from a small group of American and Brazilian Deaf children with CI. Naturalistic data collection begins from as young as age 1;0 and goes up to age 8;06. The youngest children are filmed in naturalistic interactions with a parent or a trained research assistant; in sign-target sessions, the interlocutor is generally a Deaf signer, while in speech-target sessions, the interlocutor is a hearing person. This allows the researchers to determine the target languages, even if the environment provided is clearly highly bilingual, and thus language mixing occurs quite spontaneously. Experimental data from older children are collected at language 'fairs' designed to provide families with fun opportunities (see Quadros et al. 2015).

Lillo-Martin and colleagues provide evidence that 2–3-year-old children show cross-linguistic influences in at least two grammatical domains that would be unexpected under Hulk & Müller's (2000) approach (Lillo-Martin et al. 2010): word order and wh-questions. First, they observe influence on word order, which might not be an interface phenomenon. When speaking American English (AE) or Brazilian Portuguese (BP), the recorded children produce instances of word order reflecting that of the sign language. Examples are given in (1) and (2).

(1) OV order (Igor, 2;10)
    BP:        Mãe, Laura cabeça bateu.
    Target BP: Mãe, a Laura bateu a cabeça.
    'Mom, Laura hit her head.'
    (Lillo-Martin et al. 2010: 271)

(2) VS order (Ben 2;03)
    AE:        stuck it
    Target AE: it's stuck
    (Lillo-Martin et al. 2010: 271)

Whether this is an interface phenomenon or not depends ultimately on what factors determine word order in (sign) languages, which is an area that is not yet fully understood. What is certain is that here as well the directionality of the cross-linguistic influence is not the one predicted by Hulk and Müller's model: children do not opt for an overlapping structure, as the model would predict, since the word order they produce is ungrammatical both in English and in Portuguese. Another interesting case is that of wh-questions (Lillo-Martin et al. 2012; Quadros et al. 2013; see Kelepir, Chapter 11). The researchers find evidence that cross-linguistic influence occurs both from sign to speech and from speech to sign. In the spoken language, in particular, this influence is visible in the frequent production of wh-in situ, as in the English examples in (3), and of wh-doubling utterances, as shown for English in (4) and for Brazilian Portuguese in (5).


(3) a. Mommy where?    (Ben 2;00)
    b. Buy go where?   (Tom 2;04)

(4) Where balloon where    (Ben 2;02)

(5) Que  eu quero que?    (Igor 2;01)
    what I  want  what
    (Lillo-Martin et al. 2012: examples 8–10)

Once again, these phenomena do not match the expectations of Hulk and Müller's model: children producing (3) and (5) do not opt for an overlapping structure, but rather produce sentences that are plainly ungrammatical in the spoken language. Perhaps the most striking observation concerning cross-linguistic influence to come out of the BiBiBi project is due to Koulidobrova (2012) and concerns argument omission. Koulidobrova reports that Koda children seem to do the exact opposite of what unimodal bilinguals do, as we just discussed: Kodas do not overuse and over-accept overt arguments in the sign language (where they might be pragmatically odd, but not ungrammatical); rather, they overproduce and over-accept null arguments in English. Adopting the same methodology as Serratrice et al. (2004), Koulidobrova shows that Tom and Lex, two balanced ASL-English bilingual Kodas, omit both subject and object arguments in English at an age, to a degree, and in contexts that are sharply different from monolinguals. Some examples are given in (6).

(6) a. CHI: Mister Conductor said __ won't crashed # he said    (Lex 4;05)
    b. CHI: Can __ give me this?    (Tom 4;06)
    (Koulidobrova 2012: 220, 228)

This observation is also corroborated by an acceptability judgment experiment (Koulidobrova 2013), which shows that ASL-English bilingual Kodas (age 6;02–7;05) over-accept embedded null subjects such as (6a) in English. As for the other language involved in the study, Koulidobrova (2012, 2014a) shows that Kodas do omit subjects in ASL as well, and more so than in English, but they do so to a lesser degree than Italian-English bilinguals do in Italian (as reported in Serratrice et al. (2004)), and than adult monolinguals do in ASL (Wulf et al. 2002). An answer to the question whether the rate of omission of Kodas matches what monolingual children of the same age do must wait until comparable data are available for monolinguals.4 What is the source of this difference in the directionality of cross-linguistic influence? Is this something connected to the nature of bimodal bilingualism and the relation it establishes/allows between the two languages, or is it rather related to the grammar of null argument licensing in sign languages? While Koulidobrova opts decidedly for the former option, as we shall see and discuss in the following section, some doubts can be raised. As Koulidobrova (2017) herself has argued convincingly, argument omission in ASL (and possibly in other sign languages) does not display the same properties as argument omission in null subject languages and is supposedly a radically different phenomenon.5 If this is true, the cross-linguistic influence that we observe in ASL-English bilinguals with respect to null arguments might not just be peculiar in directionality, but
might rather be a structurally different phenomenon, licensed in different contexts, and only superficially comparable to what happens with English-​Spanish or English-​Italian bilinguals.

27.4 Simultaneity and blending

As we have seen, it has now been well established that bilinguals have two separate language systems, which might influence one another at the level of their competence. But what happens online? When a bilingual speaks, reads, or listens to one language, what happens to the other language? Is it simply dormant, or is it active and in need of being inhibited? And how do these questions relate to the specificity of bimodality and the availability of two autonomous channels for articulation? These crucial questions, which have put bimodal bilingualism at the center of the debate, are discussed here.

27.4.1 Cross-language activation: experiments

An increasing number of studies show that both languages are active when bilinguals read (Dijkstra 2005), listen (Marian & Spivey 2003), and speak (Kroll et al. 2006) each language. Cross-language activation has been shown to be effective at various levels of processing. For instance, cross-linguistic priming effects have been found at both the lexical (e.g., Finkbeiner et al. 2004) and the syntactic (e.g., Loebell & Bock 2003) level. There is now growing consensus that bilinguals do not 'switch off' the language that they are not using, even when it would be beneficial to do so. The language pairs used to study cross-language activation have displayed varying degrees of orthographic and phonological overlap, but the results were still compatible with the idea that parallel activation is to be attributed to something closely related to the phonological input/output: successful understanding of spoken languages involves the activation and retrieval of lexical items that correspond to incoming information. In unimodal bilinguals, auditory or written input might non-selectively activate lexical items across the two languages, which explains co-activation. If this were all there is, we could consider cross-language activation a superficial phenomenon close to the input. However, crucially, it has been shown that the same effects of cross-language activation are also systematically observed in bimodal bilinguals, where, of course, no appeal to concrete phonological overlap is possible (Morford et al. 2011; Ormel et al. 2012; Shook & Marian 2012; among others). This suggests that parallel activation of lexical items happens at a more abstract level than previously thought, and that it is not modality-specific. Morford et al. (2011) examined cross-modal activation during written word recognition in a group of deaf bilingual adults. In a semantic relatedness judgment task, pairs of written English words were presented sequentially on a computer screen, and participants were asked to decide whether or not the words were semantically related. Half of the word pairs corresponded to pairs of phonologically related ASL translation equivalents, and half had phonologically unrelated ASL translation equivalents. Morford and colleagues observed that word pairs that were semantically related were judged more quickly when the forms of the ASL translations were also similar, whereas word pairs that were semantically unrelated were judged more slowly when the forms of the ASL translations were similar. A control group of hearing bilinguals without any
knowledge of ASL showed an entirely different behavior. These results were taken to demonstrate what can be called implicit bimodal priming: readers activate the ASL translations of written words under conditions in which the translation is neither present nor required to perform the task. Similar findings were later replicated with other language pairs, such as German Sign Language (DGS) and German (Kubus et al. 2014). Ormel et al. (2012) investigated the same effect using a different methodology and a different population: school-aged Deaf bilingual children acquiring Sign Language of the Netherlands (NGT) and Dutch. They asked participants to complete a picture-word matching task in Dutch. They found that response time was slowed in the non-matching condition if the picture had a name in NGT that was phonologically related to the NGT translation equivalent of the Dutch target word. Shook & Marian (2012) observe the same kind of effect in online spoken processing. This was an important step forward because here, contrary to previous studies, the mismatch of the input was complete and genuinely cross-modal: an auditory input activating a visual sign. Hearing ASL-English bilinguals' and English monolinguals' eye movements were recorded during a visual world paradigm task, in which participants were instructed, in English, to select an object from a display. In critical trials, the target item appeared with two distractor items and a competitor item whose ASL name overlapped in phonology with that of the target. Shook and Marian observed (i) that bimodal bilinguals looked more at competing items than at phonologically unrelated items, and (ii) that they looked more at competing items than monolinguals did, indicating activation of the sign language names during spoken English comprehension. This finding was interpreted as suggesting that language co-activation is not modality-specific, at either the output or the input level. Before turning to the strongest evidence for cross-modal activation, namely spontaneous production of simultaneous sign/speech (see Section 27.4.2 on code-blending), let us briefly focus on a last important group of studies suggesting that cross-language activation is something deeper and more abstract than simple input/output competition. Bilinguals have been reported to show selective advantages in non-linguistic cognitive control abilities compared to monolinguals, for instance in conflict monitoring, conflict resolution, and task switching (Bialystok et al. 2004, 2008; among others). One reasonable explanation for this advantage is that bilinguals are 'trained' by the fact that they must engage general cognitive control mechanisms to manage the cognitive demands of bilingual language processing. In particular, a cognitive demand that bilinguals commonly experience is cross-linguistic competition during auditory word recognition. Resolving such cross-linguistic competition has been posited to require cognitive inhibition skills (Green 1998; Shook & Marian 2012). Bimodal bilinguals do not experience within-modality perceptual competition between their languages. If the experience of this competition and the necessity of its resolution is what explains the cognitive advantages displayed by unimodal bilinguals, we expect bimodal bilinguals not to exhibit these advantages. Karen Emmorey and colleagues have investigated this issue in a series of papers. Emmorey et al.
(2008) compared the performance of ASL-English Coda bilinguals, speech-unimodal bilinguals, and English monolinguals on a conflict resolution task. They found that bimodal bilinguals did not differ from monolinguals, suggesting that bimodal bilinguals do not experience the same advantages in cognitive control as unimodal bilinguals.


In a subsequent study, however, Giezen et al. (2015) aimed at investigating possible correlations between cross-modal activation and non-linguistic inhibitory control abilities in bimodal bilinguals. In order to test cross-language activation, they used the same visual world paradigm task used by Shook & Marian (2012) in the experiment discussed above. To test non-linguistic control abilities, the investigators prepared a non-linguistic spatial Stroop task: a black arrow was presented on the screen, and participants were asked to respond to its direction (either left or right) while ignoring its location on the screen (either left, right, or center). Congruent trials consisted of a leftward arrow on the left side of the screen, or a rightward arrow on the right side of the screen; incongruent trials consisted, of course, of a leftward arrow on the right side of the screen, or a rightward arrow on the left side. All participants completed the word recognition task before the spatial Stroop task. Giezen et al. (2015) were able to replicate Shook & Marian's (2012) results, since they observed that bimodal bilingual participants fixated the cross-linguistic competitors more than the unrelated distractors, suggesting that they co-activated the ASL translations of the spoken target words in the experiment. They also observed that participants with smaller Stroop effects, i.e., with better inhibitory control, experienced reduced competition from ASL during early stages of auditory word recognition. This correlation suggests that bimodal bilinguals engage non-linguistic inhibitory control to resolve competition from ASL during English spoken word recognition. This in turn is the strongest confirmation that perceptual competition is not required for the engagement of inhibitory control in bilingualism. Still, the inhibitory control that bimodal bilinguals need to exert might be weaker than what is necessary for unimodal bilinguals. While the latter have no choice but to suppress one of the languages they are competent in, the former can occasionally set both free. This is what happens when both signs and words are produced, as we shall see in the next section. Going back to the cross-linguistic phenomena that we discussed in Section 27.3, some scholars have proposed to directly relate cross-language influence and cross-language activation. Nicoladis (2006), for instance, proposes that interference phenomena are due to imperfect inhibition, with a syntactic structure of the inhibited language intruding onto the active one. Koulidobrova (2014b) suggests that the peculiarity of argument omission in bimodal English-ASL bilinguals (who, remember, omit subject arguments in English) with respect to unimodal Italian-English bilinguals (who overproduce overt arguments in Italian) might also be due to inhibition and to its different cost: bimodal bilinguals, for whom the cost is lower, have a reduced processing load compared to unimodal bilinguals, and this allows them to deal more easily with null arguments.
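To make the correlational logic of the Giezen et al. (2015) design concrete, the short sketch below shows, in schematic form, how a per-participant spatial Stroop effect could be computed and related to competitor fixations from the visual world task. It is a minimal illustration only: the numbers, variable names, and the use of a simple Pearson correlation are assumptions made for the example, not the actual data or analysis pipeline of the study.

    # Illustrative sketch only (hypothetical numbers, not data from Giezen et al. 2015):
    # compute a per-participant spatial Stroop effect and correlate it with the
    # proportion of looks to cross-linguistic competitor pictures.
    import numpy as np

    # Hypothetical per-participant mean reaction times (ms) and fixation proportions.
    rt_incongruent = np.array([640.0, 655.0, 590.0, 700.0, 615.0])   # arrow location conflicts with its direction
    rt_congruent = np.array([600.0, 598.0, 570.0, 640.0, 590.0])     # arrow location matches its direction
    competitor_fixations = np.array([0.24, 0.28, 0.18, 0.35, 0.22])  # looks to ASL-related competitors

    # Stroop effect: the slowdown caused by the conflicting spatial cue.
    # Smaller values indicate better non-linguistic inhibitory control.
    stroop_effect = rt_incongruent - rt_congruent

    # A positive correlation would mean that participants with weaker inhibitory
    # control (larger Stroop effects) show more cross-language competition,
    # which is the pattern reported in the study.
    r = np.corrcoef(stroop_effect, competitor_fixations)[0, 1]
    print("Stroop effects (ms):", stroop_effect)
    print(f"Correlation with competitor fixations: r = {r:.2f}")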

27.4.2 Code-blending

We have seen in the last subsection that the two languages of bilinguals are simultaneously active, with the 'inhibited' one implicitly priming and interfering with the 'overt' one. The strongest evidence for this conclusion, in addition to the experimental findings discussed above, is the spontaneous behavior of bimodal bilinguals. What characterizes bimodal bilinguals is the availability of two independent articulatory channels. If the two languages are really active, and inhibitory control is costly, then we expect bimodal bilinguals to exploit this extraordinary availability and to produce signs and words
simultaneously, in what has been called code-blending. And this is indeed what they do. Hearing bimodal bilinguals frequently code-blend in conversations with other bimodal bilinguals (Emmorey et al. 2008) and sometimes even in conversations with non-signers (Casey & Emmorey 2009). Even more interestingly, bimodal bilinguals clearly prefer code-blending over code-switching, that is, alternating between speech and sign, which would likely require inhibition of the non-target language. Emmorey et al. (2012) compared picture-naming times for ASL-English code-blends with those for English words and ASL signs alone and found that code-blending did not slow down ASL retrieval, although it slowed down English production because participants synchronized ASL and English articulatory onsets. Furthermore, during language comprehension, code-blending facilitated lexical access. Bimodal bilinguals are thus able to simultaneously access lexical signs and spoken items, seemingly without additional processing costs. Studies carried out on various language pairs – English-ASL (Emmorey et al. 2003, 2005, 2008; Bishop & Hicks 2008; Lillo-Martin et al. 2014; among others), French-LSQ (Petitto et al. 2001), Dutch-NGT (van den Bogaerde & Baker 2005; Baker & van den Bogaerde 2008), Portuguese-Libras (Lillo-Martin & Müller de Quadros 2009), Italian-Italian Sign Language (LIS; Donati & Branchini 2013; Branchini & Donati 2016), and Finnish-Finnish Sign Language (Kanto et al. 2013) – clearly confirm that bimodal bilinguals largely prefer code-blending over code-switching. Since bimodal bilinguals do not code-switch, while unimodal bilinguals do, it is reasonable to assume that code-blending is what code-switching looks like when the usual articulatory constraint imposing just one channel is suspended. Many aspects of code-blending are interesting and have attracted the attention of investigators, such as what factors trigger language choice and blending in children and adults (see Lillo-Martin et al. (2014), in addition to the studies just cited), or its sociolinguistic functions (Bishop 2006). We will focus here on its formal aspects, and in particular on the following question: what is/are the grammar(s) regulating code-blending, and how autonomous can the two simultaneous language strings be as far as content, word order, and completeness are concerned?

27.4.2.1 Classifications

In Emmorey et al. (2008), code-blending generally involves translation equivalents (~82%), that is, the participants produce a sign roughly equivalent in meaning to the spoken word, as illustrated in (7).

(7) Equivalent code-blend
    English: I don't think he would really live
    ASL:     NOT THINK REALLY LIVE
    (Emmorey et al. 2008: 48)

Sometimes, non-​equivalent forms of blending are used (~16%). These non-​equivalent cases include basically two types: cases in which speech elements and sign elements are not translation equivalents, as in (8), and cases containing translation equivalents, but where a sign is produced several seconds before its spoken translation equivalent, and (therefore) occurs simultaneously with a non-​equivalent word. We shall use the term ‘non-​alignment’ for the latter case, keeping ‘non-​equivalent’ for the truly non-​equivalent cases, like (8), where ‘Tweety’ and ‘bird’ are not strictly synonymous.


(8) Non-equivalent blend (semantics)
    English: Tweety has binoculars too
    ASL:     BIRD HAVE BINOCULARS
    (Emmorey et al. 2008: 49)

Baker & van den Bogaerde (2008) use a more articulated classification that introduces other useful dimensions of analysis. They distinguish between (i) speech-based code-blends, where the utterance is mostly spoken with occasional concurrent production of signs that are translation equivalents, as in (9); (ii) sign-based code-blends, in which the utterance is primarily signed with some accompanying speech, as in (10); (iii) full code-blends, in which the proposition is fully expressed in both modalities, as in (11); and (iv) mixed code-blends, in which aspects of the utterance are expressed in each modality and both are needed to determine the full meaning of a proposition, as in (12).

(9) Speech-based code-blend (Mother of Jonas, utterance 105)
    NGT:   VALLEN
           fall
    Dutch: die gaat vallen
           this goes fall
    'This is going to fall.'
    (Baker & van den Bogaerde 2008: 7)

(10) Sign-based code-blend (Mother of Jonas, utterance 57)
     NGT:   INDEXhe JAS BLAUW
            he coat blue
     Dutch: blauw
            blue
     'He has a blue coat.'
     (Baker & van den Bogaerde 2008: 8)

(11) Full code-blend (Mother of Mark, utterance 60b)
     NGT:   ALLEMAAL KAN-NIET
            all cannot
     Dutch: allemaal kan niet
            all cannot
     'None of us can do (that).'
     (Baker & van den Bogaerde 2008: 9)

(12) Mixed code-blend (Mother of Jonas, utterance 81)
     NGT:   DAN HARD GENOEG
            then hard enough
     Dutch: dan als genoeg
            then when enough
     'Then when (the fish) is hard, it is enough.'
     (Baker & van den Bogaerde 2008: 8)

Van den Bogaerde and Baker also point out that most cases of code-blending they observe involve structures that simultaneously conform to the grammar of both languages and are thus grammatically congruent. This is in line with the figures presented in Petitto et al. (2001), a study on the bimodal development of LSQ and French in three Koda children.


They report only six instances of incongruent syntax (out of 320 mixes), where speech and sign follow two different word orders, each corresponding to the one prescribed by the respective grammar. One example is given in (13).

(13) Incongruent blend (grammar)
     LSQ:    CHIEN MON
             dog my
     French: mon chien
             my dog
     'my dog'
     (Petitto et al. 2001: 489)

Quadros et al. (2014) similarly report a very small number of incongruent blends: only one out of 59 utterances with more than one sign and one word. Most of the studies quoted so far focus on language pairs where the two languages belong to the same word order typology, such as ASL and English, LSQ and French, or even NGT and Dutch: ASL and English are both head-initial, LSQ and French as well, and as for NGT and Dutch, they are both canonically SOV. This might explain in part why these studies report so few cases of incongruent blends such as (13). This situation is very different from the one reported in the studies by Donati & Branchini (2013) and Branchini & Donati (2016) on LIS-Italian bimodal bilinguals aged 8–11 years. Even if they do not provide a quantitative assessment of the blends they discuss, they find that incongruent blends are robustly attested in their corpus. Italian and LIS are particularly well suited for observing the grammatical constraints at play in blending because the two languages belong to two typological extremes: Italian is a head-initial language, with auxiliary and negation preceding the verb, and wh-movement to the left; LIS is equally coherently head-final, with in particular negation and aspectual markers following the verb, and even wh-movement to the right. Congruent cases, where the grammatical requirements of the two languages happen to coincide, are consequently predicted to be less frequent – and they are. In most cases within the fully equivalent type, that is, in cases in which both strings provide a full utterance containing translation equivalents, Branchini and Donati observe at least three possibilities: (i) the two language strings follow the grammar of the spoken language, as in (14); (ii) the two language strings follow the grammar of the sign language, as in (15); and (iii) each language string follows its own grammar, thus yielding different word orders, as in (16).

(14) Speech-based blend (grammar)
     Italian: Una bambina va allo zoo
              a child go.3SG to.the zoo
     LIS:     GIRL GO ZOO
     'A girl goes to the zoo.'
     (Branchini & Donati 2016: 11)

(15) Sign-based blend (grammar)
     Italian: Zio zia vero Roma abita
              uncle aunt actually Rome live.3SG
     LIS:     UNCLE AUNT REAL ROME LIVE
     'My uncle and aunt actually live in Rome.'
     (Branchini & Donati 2016: 14)


(16) Incongruent blend (grammar)
     Italian: Non ho capito
              not have.1SG understand.PTCP
                                 neg
     LIS:     UNDERSTAND NOT
     'I don't understand.'
     (Branchini & Donati 2016: 11)

Table 27.1 summarizes the various classifications that have been proposed for code-blending in the literature, organized along four dimensions: completeness, grammar, interpretation, and alignment.

Table 27.1 Classifications of code-blending types

COMPLETENESS
  • independent: two full utterances
  • dependent: sign-based (only signed full utterance); speech-based (only spoken full utterance)
GRAMMAR
  • independent: congruent (same word order in both strings); incongruent (different word order)
  • dependent: sign-based (sign language dictates word order); speech-based (spoken language dictates word order)
INTERPRETATION
  • independent: equivalent (all translation equivalents)
  • dependent: non-equivalent (not only translation equivalents)
  • mixed: both strings contribute to the proposition
ALIGNMENT
  • aligned
  • misaligned

27.4.2.2 Correlations

The classifications that we have briefly introduced in the preceding section and summarized in Table 27.1 are obviously quite intricate and present some redundancies. More interestingly, they also appear to correlate in various ways. The types that we label 'dependent' in the table, namely the speech-based and sign-based types for completeness and those for grammar, overlap. In both cases, what can be said is that the utterance is fundamentally monolingual, and as such, only complete in one language and only grammatical in that very language, while the blending observed is a simple, very late, and superficial exploitation of the additional channel available, lexicalizing twice all or most items of this abstract monolingual utterance. In other words, these types of blending can be interpreted as cases where lack of inhibition happens very late, at the level that many linguists identify with spell-out, i.e., at the level at which the abstract structures generated by syntax get linearized and pronounced. A more interesting, but this time negative, correlation has been observed in Branchini & Donati (2016) between the interpretive categories and those of grammar. They show that no blending is possible in cases where there is some amount of non-equivalence between the two strings as far as content is concerned, and the two strings are either sign-based or
speech-based. In other words, non-equivalent blends are necessarily independent types as far as grammar is concerned. An example of a non-equivalent blend is given in (17).

(17) Italian: Le meduse non c'erano / non c'erano
              the.F.PL jellyfish.PL NEG there.be.PST.PL / NEG there.be.PST.PL
     LIS:     JELLYFISH SEE NOT / THERE-IS-NOT
     Italian: 'The jellyfish were not there. They were not there.'
     LIS:     'I didn't see the jellyfish. They were not there.'
     (adapted from Branchini & Donati 2016: 18)

In (17), which contains two adjacent clauses articulated simultaneously in Italian and LIS, a case of non-equivalence is observed in the first clause: while the speech string expresses that there were no jellyfish, the LIS string expresses that the signer did not see jellyfish. Interestingly, these two strings display a different word order, in accordance with Italian and LIS grammar, respectively: negation is preverbal in the Italian string and postverbal in the LIS one.6 This suggests that only grammatically independent strings, hence not the late blends just discussed above, may allow for a content mismatch. And this in turn is compatible with the idea that blends like (17), and in general independent blends, stem from a lack of inhibition that is much earlier and more profound than the one discussed above. Example (17) is compatible with the idea that the two languages are active and project a linguistic representation as early as the first lexical insertion. This would give us basically two very different types of blends, stemming from very different types of lack of inhibition: (i) the first type, the one behind all the cases of dependent blends, would stem from a late and superficial lack of inhibition, by which the terminal nodes of a monolingual utterance get spelled out twice; (ii) the second type, the one responsible for independent and non-equivalent blends, would stem from a very early and abstract lack of inhibition, yielding two independent and partially mismatching full-fledged representations.7 A final interesting correlation that goes in the same direction is observed by Branchini & Donati (2016) and summarized in Table 27.2.

Table 27.2 Correlations observed between word order types, morphology, and prosody of the language strings in blending utterances within the corpus

Dependent                                Independent
one word order                           two word orders
one full-fledged morphological string    two full-fledged morphological strings
one intact prosody                       two intact prosodies

What Branchini & Donati (2016) show is that dependent types of blending do not simply display a non-target word order of an incomplete utterance in one of the two strings; rather, the weak string is also morphologically and prosodically deficient, with lack of functional words, with default morphology, and with non-fluent prosody. To illustrate with a sign-based blend, in example (15), repeated here as (18), the Italian string is not only deviant as far as word order is concerned. Also, the preverbal locative Roma is ungrammatical in Italian, the subject nouns lack a determiner and a locative marker, and the verb displays a default third person singular form that does not agree with the plural subject (it should be abitano).


(18) Sign-based blend (grammar)
     Italian: Zio zia vero Roma abita
              uncle aunt actually Rome live.3SG
     LIS:     UNCLE AUNT REAL ROME LIVE
     'My uncle and aunt actually live in Rome.'
     (Branchini & Donati 2016: 14)

This deviant morpho-syntax goes hand in hand with a disrupted prosody in the same string, which is uttered in a way that has been described in the literature as 'Deaf voice', a distinctive feature of bimodals' vocalizations (Bishop & Hicks 2005) which makes them similar to the vocalizations of deaf people when they produce spoken language. More precisely, Deaf voice includes a pervasive nasalization, a distortion of the prosody towards the extremes of highs and lows, and strong assimilation processes leading to a loss of syllables. Branchini and Donati claim that this marked prosody, and the morphological impoverishment illustrated in (18), are only possible when the blend is sign-dominant. In independent blends like (16) or (17), morphology and prosody are systematically full-fledged and well-formed. If confirmed, the correlations summarized in Table 27.2 would be another argument in favor of the idea that blends in bimodal bilinguals can stem from two very different situations of lack of inhibition: one that is very early, and gives rise to two globally independent full-fledged representations, as in independent blends, and one that is very late and gives rise to one monolingual representation with double lexicalization, as in dependent blends.

27.4.2.3 When does blending occur: the Language Synthesis model and beyond

In dependent cases, it is quite clear that there is only one underlying representation, and that double access is a late phenomenon, directly related to the fact that the articulatory constraints are removed for bimodal bilinguals. This accounts for all the cases where one grammatical frame, and one proposition, seem to be selected. This is also very well accounted for by the Bilingual Language Synthesis model proposed by Lillo-Martin and colleagues in the context of the BiBiBi project. This model can be schematized as in Figure 27.1.

Figure 27.1  The Language Synthesis model (Lillo-​Martin et al. 2012)


According to the Language Synthesis model, there is always one and only one derivation in code-blending. The computational component, from the selection of roots and functional morphemes until pronunciation, involves a single set of elements, which can nevertheless be selected from the two language systems. Given that the output involves two sets of articulators, and the process of vocabulary insertion can introduce both a signed word and a spoken word at some point, parallel tracks can be formed that will eventually be realized as sign and speech. This model can account for the cross-language influence phenomena described and discussed in Section 27.3.2, which would simply stem from an initial selection of a mixed set of root morphemes belonging to the two languages (see, for example, Koulidobrova (2012) for a discussion). This model can also account for code-switching (for which it was originally conceived; cf. Pierantozzi et al. (2007) and Pierantozzi (2012) for a similar proposal) and appears to be very successful in relation to code-blending. It accounts easily for all the cases where there is only one grammar involved and one equivalent interpretation. It can also account for at least some non-equivalent cases, such as example (8) above, which can be attributed to an imperfect translation-equivalent pairing. It can account for misalignments, which can simply be interpreted as stemming from timing discrepancies (see Emmorey et al. 2008): misalignments are still consistent with one derivation, knowing that the actual pronunciation of words and signs comes much later than the syntactic computation. However, the independent types of blends that we have discussed in the preceding section are more difficult to analyze under this view, since they seem to involve more than just double vocabulary insertion. Donati & Branchini (2013) tried to explain the non-congruent cases as being due to a late linearization phenomenon. Cases like (16) would then correspond to a unique abstract representation linearized in two different ways at the moment of spelling out the inserted vocabulary items. This view would be perfectly compatible with the Language Synthesis model, where parallel access is a late phenomenon applying to a unique, possibly mixed, representation. Yet, such an analysis appears to be challenged by the correlations observed between word order and other morpho-phonological features of the string, and by the correlation with mismatch in interpretation, which we discussed in the preceding section. As we have seen, these correlations really seem to call for a dual derivation working in parallel for the two languages from early on. Given what we know about cross-modal activation (Section 27.4.1), it would be difficult to rule out this possibility, namely, that bilinguals are able to deal with two independent co-active full-fledged representations in their two languages. This, on the other hand, opens the way to fascinating developments, calling for a reanalysis of what is known about code-switching. Remember that code-blending is code-switching minus the articulatory constraints. So, if bilinguals can have a double parallel activation not just of two words but of two full-fledged language structures, code-switching could also be seen as the reflex of this, and not just or not only the realization of a language alternation (but see Lillo-Martin et al. (2016) for an interesting discussion about the same issues drawing different conclusions).

27.5 Conclusion

This chapter has explored the fascinating field of bimodal bilingualism, starting from its definition and its first studies (Section 27.2), and focusing in particular on three crucial issues concerning the relation between the two languages in the bilingual mind: language
separation and cross-language influence (Section 27.3); cross-language activation (Section 27.4.1); and language- or code-mixing (Section 27.4.2). In these three areas, bimodal bilingualism can be seen as an extraordinary natural experiment, which allows us to see what goes on in the bilingual mind when articulatory constraints are suspended. Research on this topic is still very young, many facts are still unknown, and many questions are still open, but it is clearly a major source for our understanding of the language faculty, and ultimately of the human mind, and it deserves the full attention and the passionate work of linguists and psycholinguists.

Notes

1 Many Codas keep strong ties with their Deaf heritage and their experience of growing up as a hearing child in a Deaf family. An organization, called CODA, was created to bring Codas together (www.coda-international.org). See Bishop (2006) for the history of the formation of this identity.
2 The signing and the writing of Deaf signers have been studied to some extent (Lucas & Valli 1989; Lillo-Martin 1998; Kuntze 2000; Menéndez 2010), revealing the same kind of interaction effects between the two languages that are typical of bilinguals. Still, the role of the shifted channel (both languages are accessed through the visual channel) remains to be fully understood.
3 See Lillo-Martin et al. (2016) for a more systematic assessment of the development of bimodal bilingualism and its implications for linguistic theory.
4 It is well known that very young monolinguals of ASL (ages 1;08–2;10) massively omit subject arguments (see, for example, Quadros et al. 2001), and that the rate of omission tends to drop as they grow older (3;06–5;09; cf. Lillo-Martin 1991), but the exact figures that would allow a comparison between monolinguals and the bilinguals of the study are still missing. As pointed out to me by a reviewer (whom I thank), Reynolds (2016) analyzed some of the same bimodal bilingual children as Koulidobrova, but focused on their ASL narratives elicited when they were older. She found that they did overuse some overt forms, contrary to what Koulidobrova had found in the younger longitudinal data. The different nature of the data observed (narratives vs. conversations) and the different age of the sample might partially explain this puzzling difference.
5 See Cecchetto, Chapter 13, and Lillo-Martin, Chapter 14, for a review of the nature of null arguments in ASL and other sign languages.
6 Branchini & Donati (2016) integrate the spontaneous data obtained from the observation of Kodas with elicitation and grammaticality judgment data obtained from adult Codas. The latter confirm that an example like (17) would be ungrammatical if negation were realized differently, be it Italian-based (and thus preverbal in both strings) or LIS-based (thus postverbal in both strings).
7 Still, something needs to be said about why non-equivalent blends are relatively rare and, more importantly, why the kind of mismatch observed in their content never goes very far (cf. the example in (8), where the mismatch is akin to specific/generic: Tweety/bird; or that in (17), where the mismatch remains within clear boundaries, with one expression ('didn't see') being weaker than the other ('there wasn't') on an epistemic scale). See Emmorey et al. (2008) for an interesting discussion of this point, and of whether the constraint at play is a general cognitive one or something more specific to language, such as Levelt's (1989) central processing constraints for all natural languages, which impede the production or interpretation of two concurrent propositions.

References

Baker, Anne & Beppie van den Bogaerde. 2008. Codemixing in signs and words in input to and output from children. In Carolina Plaza Pust & Esperanza Morales López (eds.), Sign bilingualism: Language development, interaction, and maintenance in sign language contact situations, 1–27. Amsterdam: John Benjamins.
Bialystok, Ellen, Fergus Craik, & Gigi Luk. 2008. Lexical access in bilinguals: Effects of vocabulary size and executive control. Journal of Neurolinguistics 21. 522–538.


Bialystok, Ellen, Fergus Craik, Raymond Klein, & Mythili Viswanathan. 2004. Bilingualism, aging, and cognitive control: Evidence from the Simon task. Psychology and Aging 19(2). 290–303.
Bishop, Michelle & Sherry Hicks. 2005. Orange eyes. Bimodal bilingualism in hearing adults from deaf families. Sign Language Studies 5. 188–230.
Bishop, Michelle. 2006. Bimodal bilingualism in hearing, native users of American Sign Language. Washington, DC: Gallaudet University PhD dissertation.
Bishop, Michelle & Sherry Hicks. 2008. Coda talk: Bimodal discourse among hearing, native signers. In Michelle Bishop & Sherry Hicks (eds.), Hearing, mother father deaf: Hearing people in deaf families, 54–96. Washington, DC: Gallaudet University Press.
Blamey, Peter & Julia Sarant. 2003. Development of spoken language by deaf children. In Mark Marschark & Patricia Spencer (eds.), The Oxford handbook of deaf studies, language, and education, 232–246. Oxford: Oxford University Press.
Bouchard, Marie-Eve, Christine Ouellet, & Henri Cohen. 2009. Speech development in prelingually deaf children with cochlear implants. Language and Linguistic Compass 3. 1–18.
Branchini, Chiara & Caterina Donati. 2016. Assessing lexicalism through bimodal eyes. Glossa 1(1). 1–48.
Bull, Thomas. 1998. On the edge of Deaf culture: Hearing children/Deaf parents. Alexandria, VA: DFR Press.
Casey, Shannon & Karen Emmorey. 2009. Co-speech gesture in bimodal bilinguals. Language and Cognitive Processes 24(2). 290–312.
Davidson, Kathryn, Diane Lillo-Martin, & Deborah Chen Pichler. 2014. Spoken English language development among native signing children with cochlear implants. Journal of Deaf Studies and Deaf Education 19(2). 238–250.
De Houwer, Annick. 1990. The acquisition of two languages from birth: A case study. Cambridge: Cambridge University Press.
De Houwer, Annick. 1995. Bilingual language acquisition. In Peter Fletcher & Brian MacWhinney (eds.), The handbook of child language, 219–250. Cambridge, MA: Blackwell.
Dijkstra, Ton. 2005. Bilingual visual word recognition and lexical access. In Judith Kroll & Annette de Groot (eds.), Handbook of bilingualism: Psycholinguistic approaches, 179–201. New York: Oxford University Press.
Donati, Caterina & Chiara Branchini. 2013. Challenging linearization: Simultaneous mixing in the production of bimodal bilinguals. In Theresa Biberauer & Ian Roberts (eds.), Challenges to linearization, 93–128. Berlin: Mouton de Gruyter.
Emmorey, Karen, Helsa Borinstein, & Robin Thompson. 2005. Bimodal bilingualism: Code blending between spoken English and ASL. In James Cohen, Kara McAlister, Kellie Rolstad, & Jeff MacSwan (eds.), Proceedings of the 4th International Symposium on Bilingualism (ISB4), 663–673. Somerville, MA: Cascadilla Press.
Emmorey, Karen, Helsa Borinstein, Robin Thompson, & Tamar Gollan. 2008. Bimodal bilingualism. Language and Cognition 11. 43–61.
Emmorey, Karen, Thomas Grabowski, Stephen McCullough, Hannah Damasio, Laura Ponto, Richard Hichwa, & Ursula Bellugi. 2003. Neural systems underlying lexical retrieval for sign language. Neuropsychologia 41. 85–95.
Emmorey, Karen, Gigi Luk, Jenny Pyers, & Ellen Bialystok. 2008. The source of enhanced cognitive control in bilinguals: Evidence from bimodal bilinguals. Psychological Science 19(2). 1201–1206.
Emmorey, Karen, Jennifer Petrich, & Tamar Gollan. 2012. Bilingual processing of ASL–English code-blends: The consequences of accessing two lexical representations simultaneously. Journal of Memory and Language 67(1). 199–210.
Finkbeiner, Matthew, Karen Forster, Janet Nicol, & Kumiko Nakamur. 2004. The role of polysemy in masked semantic and translation priming. Journal of Memory and Language 51. 1–22.
Genesee, Fred. 1989. Early bilingual development: One language or two? Journal of Child Language 16. 161–179.
Giezen, Marcel, Henrike Blumenfeld, Antony Shook, Viorica Marian, & Karen Emmorey. 2015. Parallel language activation and inhibitory control in bimodal bilinguals. Cognition 141. 9–25.
Green, David. 1998. Mental control of the bilingual lexico-semantic system. Bilingualism: Language and Cognition 1. 67–81.
Grosjean, François. 1998. Studying bilinguals: Methodological and conceptual issues. Bilingualism: Language and Cognition 1(2). 131–149.

632

633

Bimodal bilingual grammars Hulk, Aafke & Natasha Müller. 2000. Bilingual first language acquisition at the interface between syntax and pragmatics. Bilingualism: Language and Cognition 3(3). 227–​244. Kanto, Laura, Kerttu Huttunen, & Marja-​Leena Laasko. 2013. Relationship between the linguistic environments and early bilingual language development of hearing children in Deaf-​parented families. Journal of Deaf Studies and Deaf Education 18. 242–​260. Klima, Edward & Ursula Bellugi. 1979. The signs of language. Cambridge, MA:  Harvard University Press. Koulidobrova, Elena. 2012. When the quiet surfaces:  “Transfer” of argument omission in the speech of ASL-​English bilinguals. Storrs, CT: University of Connecticut PhD dissertation. Koulidobrova, Elena. 2013. Influence uninhibited:  Null subjects in the speech of ASL-​English bilinguals. In Stavroula Stavrakaki, Marina Lalioti, & Polyxeni Konstantinopolou (eds.), Advances in language acquisition, 346–​ 355. Newcastle upon Tyne:  Cambridge Scholars Publishing. Koulidobrova, Elena. 2014a. Mechanics of “transfer”: Evidence from the speech of ASL-​English bilinguals. University of Connecticut Working Papers in Linguistics (UconnWPL) 17. Cambridge, MA: MIT Press. Koulidobrova, Elena. 2014b. Null arguments in bimodal bilingualism:  Code-​blending (and the lack of) affects in American Sign Language. BUCLD 38 Proceedings, 253–​265. Somerville, MA: Cascadilla Press. Koulidobrova, Elena. 2017. Elide me bare: Null arguments in American Sign Language. Natural Language and Linguistic Theory 16(3). 491–​539. Kroll, Judith, Susan Bobb, & Zojia Wodniecka. 2006. Language selectivity is the exception, not the rule:  Arguments against a fixed locus of language selection in bilingual speech. Bilingualism: Language and Cognition 9. 119–​135. Kubus, Okan, Agnes Villwock, Jill Morford, & Christian Rathmann. 2014. Word recognition in deaf readers:  Cross language activation in German Sign Language and German. Applied Psycholinguistics 36(4). 831–​854. Kuntze, Marlon. 2000. Codeswitching in ASL and written English language contact. In Karen Emmorey & Harlan Lane (eds.), The signs of language revisited: An anthology in honor of Ursula Bellugi and Edward Klima, 287–​302. Mahwah, NJ: Lawrence Erlbaum. Levelt, Willem. 1989. Speaking: From intention to articulation. Cambridge, MA: MIT Press. Lillo-​Martin, Diane. 1991. Comments on Hyams and Weissenborn:  On licensing and identification. In Jürgen Weissenborn, Helen Goodluck, & Tom Roeper (eds.), Theoretical issues in language acquisition: Continuity and change in development, 301–​308. Hillsdale, NJ: Lawrence Erlbaum. Lillo-​Martin, Diane. 1998. The acquisition of English by Deaf signers:  Is universal grammar involved? In Susan Flynn, Gita Martohardjono, & Warren O Neil (eds.), The Generative study of second language acquisition, 131–​149. Mahwah, NJ: Lawrence Erlbaum. Lillo-​Martin, Diane, Elena Koulidobrova, Ronice Müller de Quadros, & Diane Deborah Chen Pichler. 2012. Bilingual language synthesis: Evidence from wh-​questions in bimodal bilinguals. In Alia Biller, Esther Chung, & Amelia Kimball (eds.), Proceedings of the 36th Annual Boston University Conference on Language Development, 302–​314. Somerville, MA: Cascadilla Press. Lillo-​Martin, Diane & Ronice Müller de Quadros. 2009. Two in one: Evidence for imperatives as the analogue to RI’s from ASL and LSB. 
In Jane Chandlee, Michelle Franchini, Sandy Lord, & Gudrun-​Marion Rheiner (eds.), Proceedings of the 33rd Annual Boston University Conference on Language Development, 302–​312. Somerville, MA: Cascadilla Press. Lillo-​Martin, Diane, Ronice Müller de Quadros, & Deborah Chen Pichler. 2016. The development of bimodal bilingualism: Implications for linguistic theory. Linguistic Approaches to Bilingualism 6(6). 719–​755. Lillo Martin, Diane, Ronice Müller de Quadros, Deborah Chen Pichler, & Zoe Fieldsteel. 2014. Language choice in bimodal bilingual development. Frontiers in Psychology 5. 153–​167. Lillo-​Martin, Diane, Ronice Müller de Quadros, Elena Koulidobrova, & Deborah Chen Pichler. 2010. Bimodal bilingual cross-​language influence in unexpected domains. In João Costa, Ana Castro, & Maria Lobo (eds.), Language acquisition and development:  Proceedings of GALA 2009, 264–​275. Newcastle upon Tyne: Cambridge Scholars Press. Loebell, Helga & Kathyn Bock. 2003. Structural priming across languages. Linguistics 41(5). 791–​824.

633

634

Caterina Donati Lucas, Ceil & Clayton Valli. 1989. Language contact in the American Deaf community. In Ceil Lucas (ed.), The sociolinguistics of the Deaf community, 11-40. San Diego: Academic Press. Marian, Viorica & Michael Spivey. 2003. Competing activation in bilingual language processing:  Similarity within and across languages. Journal of Psycholinguistic Research 37(3). 141–​170. Mayberry, Rachel. 1976. An assessment of some oral and manual language skills of hearing children of deaf parents. American Annals of the Deaf 121. 507–​512. Mellon, Nancy, John K. Niparko, Christian Rathmann, Gaurav Mathur, Tom Humphries, & Donna Jo Napoli. 2015. Should all deaf children learn sign language? Pediatrics 136(1). 170–​176. Menéndez, Bruno. 2010. Cross-​modal bilingualism:  Language contact as evidence of linguistic transfer in sign bilingual education. International Journal of Bilingual Education and Bilingualism 13(2). 201–​223. Morford, Jill, Erin Wilkinson, Agnes Villwock, Pilar Piñar, & Judith Kroll. 2011. When deaf signers read English: Do written words activate their sign translations? Cognition 118. 286–​292. Mouvet, Kimberley, Liesbeth Matthijs, Gerrit Loots, Miriam Taverniers, & Mieke van Herreweghe. 2013. The language development of a deaf child with a cochlear implant. Language Sciences 35.  59–​79. Müller, Natasha. 2003. Multilingual communication disorders: Exempla et desiderata. Journal of Multilingual Communication Disorders 1(1). 1–​12. Müller, Natasha & Aafke Hulk. 2001. Crosslinguistic influence in bilingual language acquisition: Italian and French as recipient languages. Bilingualism: Language and Cognition 4. 1–​53. Newport, Elisa & Richard Meier. 1985. The acquisition of American Sign Language. In Dan Slobin (ed.), The cross-​linguistic study of language acquisition, 881–​938. Hillsdale, NJ: Lawrence Erlbaum. Nicoladis, Elena. 2006. Crosslinguistic transfer in adjective-​noun strings by preschool bilingual children. Bilingualism: Language and Cognition 9. 15–​32. Nicoladis, Elena, Alyssa Rose, & Cassandra Foursha-​Stevenson. 2010. Thinking for speaking and crosslinguistic transfer in preschool bilingual children. International Journal of Bilingual Education and Bilingualism 13. 345–​370. Ormel, Ellen, Daan Hermans, Harry Knoors, & Ludo Verhoeven. 2012. Cross-​language effects in written word recognition: The case of bilingual deaf children. Bilingualism: Language and Cognition 15(2). 288–​303. Paradis, Johanne & Samuel Navarro. 2003. Subject realization and crosslinguistic interference in the bilingual acquisition of Spanish and English: What is the role of the input? Journal of Child Language 30. 371–​393. Peterson, Nathaniel, David Pisoni, & Richard Miyamoto. 2010. Cochlear implants and spoken language processing abilities: Review and assessment of the literature. Restorative Neurology and Neuroscience 28. 237–​250. Petitto, Laura-​Ann, Marina Katerlos, Bronna Levy, Kristine Gauna, Karine Tetreault, & Vittoria Ferraro. 2001. Bilingual signed and spoken language acquisition from birth: Implications for the mechanisms underlying early bilingual language acquisition. Journal of Child Language 28. 453–​496. Pierantozzi, Cristina. 2012. Agreement within early mixed DP. What mixed agreement can tell us about the bilingual language faculty. In Kurt Braunmüller & Christoph Gabriel (eds.), Multilingual individuals and multilingual societies, 137–​152. Amsterdam: John Benjamins. Pierantozzi, Cristina, Caterina Donati, Laura Bontempi, & Letizia Gasperoni. 2007. 
The puzzle of mixed agreement in early code mixing. In Adriana Belletti et al. (eds.), Language acquisition and development. Proceedings of GALA, 437–​449. Newcastle upon Tyne: Cambridge Scholars Press. Polinsky, Maria. 2018. Sign languages in the context of heritage language: A new direction in language research. Sign Language Studies 18(3). 412–​428. Preston, Paul. 1995. MOTHER FATHER DEAF: The heritage of difference. Social Science and Medicine 40(11). 1461–​1467. Quadros, Ronice Müller de, Deborah Chen Pichler, & Diane Lillo-​Martin. 2014. Code-​blending in bimodal bilingual development. Paper presented at the International Society for Gesture Studies, San Diego, CA. Quadros, Ronice Müller de, Deborah Chen Pichler, Diane Lillo-​ Martin, Carina Cruz, Viola Kozak, Jeffrey Palmer, Aline Lemos Pizzio, & Wanette Reynolds. 2015. Methods in bimodal

634

635

Bimodal bilingual grammars bilingualism research: experimental studies. In Eleni Orfanidou, Bencie Woll, & Gary Morgan (eds.), Research methods in sign language studies, 250–​280. Malden, MA: Wiley-​Blackwell. Quadros, Ronice Müller de, Diane Lillo-​Martin, & Deborah Chen Pichler. 2013. Early effects of bilingualism on WH-​question structures:  Insight from sign speech bilingualism. In Stavroula Stavraki, Marina Lalioti, & Polyxeni Kostantinopoulou (eds.), Proceedings of GALA 2011, 300–​ 308. Newcastle upon Tyne: Cambridge Scholars Press. Quadros, Ronice Müller de, Diane Lillo-​Martin, & Deborah Chen Pichler. 2016. Bimodal bilingualism: sign language and spoken language. In Mark Marschark & Patricia Spencer (eds.), The Oxford handbook of Deaf studies in language, 181–​196. Oxford: Oxford University Press. Quadros, Ronice Müller de, Diane Lillo-​Martin, & Gaurav Mathur. 2001. O que a aquisição da linguagem em crianças surdas tem a dizer sobre o estágio de infinitivos opcionais? Letras de Hoje:  Estudos e Debates de Assuntos de Lingüística, Literatura e Língua Portuguesa 36(3). 391–​397. Reynolds, Wanette. 2016. Early bimodal bilingual development of ASL narrative referent cohesion:  Using a heritage language framework. Washington, DC:  Gallaudet University PhD dissertation. Rinaldi, Pasquale & Cristina Caselli. 2014. Language development in a bimodal bilingual child with cochlear implant: A longitudinal study. Bilingualism: Language and Cognition 17(4). 798–​809. Romaine, Suzanne. 1995. Bilingualism. Malden, MA: Wiley-​Blackwell. Schiff, Naomi. 1979. The influence of deviant maternal input on the development of language during the preschool years. Journal of Speech and Hearing Research 22(3). 581–​603. Schiff, Naomi & Ira Ventry. 1976. Communicating problems in hearing children of deaf parents. Journal of Speech and Hearing Research 41(3). 348–​358. Serratrice, Ludovica. 2013. Cross-​linguistic influence in bilingual development: Determinants and mechanisms. Linguistic Approaches to Bilingualism 3(1). 3–​25. Serratrice, Ludovica, Antonella Sorace, Francesca Filiaci, & Michela Baldo. 2009. Bilingual children’s sensitivity to specificity ad genericity:  Evidence from metalinguistic awareness. Bilingualism: Language and Cognition 12. 239–​257. Serratrice, Ludovica, Antonella Sorace, & Sandra Paoli. 2004. Crosslinguistic influence at the syntax-​pragmatics interface: Subjects and objects in English-​Italian bilingual and monolingual acquisition. Bilingualism: Language and Cognition 7(3). 183–​205. Shook, Anthony & Viorica Marian. 2012. Bimodal bilinguals co-​activate both languages during spoken comprehension. Cognition 124. 314–​324. Taeschner, Traute. 1983. The sun is feminine: A study on language acquisition in bilingual children. Heidelberg: Springer. Van den Bogaerde, Beppie & Anne Baker. 2005. Code-​mixing in mother-​child interaction in deaf families. Sign Language & Linguistics 8(1/​2). 151–​174. Volterra, Virginia & Traute Taeschner. 1978. The acquisition and development of language by bilingual children. Journal of Child Language 5. 311–​326. Wulf, Alyssia, Paul Dudis, Robert Bayley, & Ceil Lucas. 2002. Variable subject presence in ASL narratives. Sign language Studies 3(1). 54–​76.

635

636

28 LANGUAGE EMERGENCE
Theoretical and empirical perspectives
Annemarie Kocab & Ann Senghas

28.1  Introduction

What is the source of the intricate patterned structure of natural human language? Evidence from emerging communication systems points to three pieces of the puzzle: 1) child learners, 2) interacting within a social community, 3) that passes the system down from one generation to another. Researchers have documented the structure of a variety of emerging systems, in the natural world and in the laboratory, that capture the effects of each of these critical ingredients. A newly emerging sign language in Nicaragua shows how the combination of all three factors – children, community, and intergenerational transmission – yields the genesis of a new language.

This complex system began in the hands and minds of children. Prior to the 1970s, deaf people in Nicaragua had little to no contact with each other. There were very few schools and clinics available to them, no social centers, and, consequently, no commonly used sign language. Then shifts in the surrounding socio-cultural environment drastically changed the lives of deaf people in Nicaragua. The changes began with the opening of a small classroom for deaf children in 1974. The program soon expanded into a new National Center for Special Education in 1977 that admitted 50 deaf students, many of whom graduated into a new vocational program for adolescents in 1981 (Polich 2005). Initially, the teachers relied on oralist methods, conducting school lessons in spoken Spanish. These efforts were met with limited success, but the teachers noticed that students were communicating on the playground and buses using gestures. While the teachers were not able to understand these gestures, it was clear that the children could. What they were all witnessing was the birth of a new language: Nicaraguan Sign Language (NSL).

Today, Nicaragua has a rich sign language and a vibrant deaf community with over 1,500 members. NSL is a natural language, like any other language in the world, used to express thoughts, desires, and beliefs. It is used to give directions, relate dreams about the future, admonish, and tell jokes. Utterances are produced and understood using a shared vocabulary and a shared set of linguistic rules. How did this complex structure emerge from the early gestures used on the playground by children who had never had full access to a mature language?


This unique phenomenon offers insight into the larger question of how humans are able to create and acquire language, a question that has fascinated scientists and philosophers dating back to Descartes. Decades of work on language acquisition and cross-​cultural studies have shown that every child, barring extreme circumstances, acquires the language of the environment in a relatively short period of time (e.g., Gleitman & Newport 1995). For humans, acquiring language comes as naturally as any instinct (Pinker 1994). In this chapter, we discuss several accounts of language emergence drawn from different sets of empirical evidence, including ‘natural experiment’ cases of pidgins and creoles, homesign, and emerging sign languages, as well as laboratory paradigms in which people learn artificially constructed ‘languages’, or produce spontaneous gestures to communicate without speaking. Natural experiments refer to those cases in which the variability or ‘manipulation’ of the factors of interest has occurred in the world outside the control of researchers, allowing them to observe and compare the effects of different environmental conditions, such as language creation by an individual in isolation (homesign) or the emergence of a new sign language in a community. By studying children who create language anew while they acquire it (e.g., Sankoff & Laberge 1973), we can better understand what is innate, what is learned, and how languages are shaped over time. In addition to these comparatively rare instances of natural experiments, researchers have sought to better understand our natural capacity for language by creating artificial learning conditions in the laboratory, observing which kinds of patterns people can (and cannot) learn, and examining the patterns within the communication codes and gestures they create. By studying language creation in the laboratory, we can vary relevant properties, including the content of the input, and the number of generations of learners, to see their effects on the structures that emerge. Language creation and language emergence encompass a variety of phylogenetic and ontogenetic events. In this chapter, we focus on the question of how language emerges and becomes complex, drawing on a variety of evidence, linked by the common thread of the role of the child. We set aside discussion of the conditions that gave rise to the first human languages and the evolutionary path leading to modern languages. We ground our discussion of language emergence in language acquisition, because theories of how language is built and survives must consider how structure is created or learned by its users, and passed down from generation to generation. By focusing our attention on the intricate link between the process of language acquisition and the process of language emergence, we will be able to understand both more clearly.

28.2  Theoretical accounts

Broadly speaking, there are two general classes of explanation for human language (see Kocab et al. (2016) for a discussion). The first posits that language is a product of the structure of the human mind. Each individual mind, either owing to our genetic endowment (e.g., Chomsky 1968, 2000; Pinker 1994) or due to evolutionary changes in our computational abilities (e.g., Christiansen & Chater 2008), is capable of creating and acquiring language. The second posits that language is a side effect of our human capacity for social learning. On this account, language developed over historical time, as a product of cultural transmission (e.g., Tomasello 2008). Owing to the lack of direct evidence (the Linguistic Society of Paris famously banned discussion of the origins of language in 1866), these arguments reflect different views
of plausible learning mechanisms and possible evolutionary developments. Accordingly, they differ in the degree of innate structure proposed. What is there to be learned in a language (and what is not) will determine the equipment needed in order to learn it –​that is, the learning mechanisms. With the Cognitive Revolution in the 1950s came a shift away from behaviorism and a growing appreciation of the importance of studying mental processes. The study of how children acquire language became a central way to better understand the human mind. Theories of how children acquire language must specify the starting state, the innate primitives, if any, that the child is equipped with. In the following section, we discuss three theories of language acquisition and their respective accounts of language emergence.

28.2.1  Structure from the mind/biology

Variations in environmental input, such as those found in the natural experiments we describe in this chapter, have implications for questions about the degree of innateness of language learning mechanisms. Chomsky's influential poverty of the stimulus argument formed a cornerstone of a broad argument for the innateness of the language faculty. The poverty of the stimulus argument can be divided into two distinguishable parts: the first is an argument against radical empiricism, or the postulate that everything can be learned through experience (James 1890). The second concerns the input needed for successful learning. Below, we discuss the first version, summarizing the logical argument that nativists have made. We do not address the second version, an empirical question that many continue to pursue (e.g., Pinker 1979; Marcus 1993; Chouinard & Clark 2003).

The first part of the poverty of the stimulus argument posits that every learner is isolated from much of the information that would be required to learn language from input alone (e.g., Chomsky 1959; Pinker 1979).1 The logic of this argument is that while every child seems to learn the language of the environment, acquiring approximately the language used by adult speakers, the 'correct' grammar to be learned is in fact one of an indefinite number of alternatives, all consistent with what has been made available in the input. In other words, the hypothesis space of possible grammars is unconstrained. Accordingly, a child can never take the absence of a sentence in the input as evidence that the sentence is not part of the language. A learner who had no innate, language-specific knowledge or biases would never encounter the data needed to choose from the set of possible languages (see Gold 1967). However, we observe that children all over the world do arrive at the correct grammars for their languages. Therefore, children must not be unconstrained learners; that is, they must have innate biases that direct their language learning.2

The question, then, is precisely what innate machinery the child needs to acquire language. Some theorists have proposed extensive innate syntax (e.g., Chomsky 1965, 1972, 1995; Pinker 1984, 1989). For instance, the semantic bootstrapping theory posits that the structure of semantic roles (like agent, patient, action, object, and so on), and of syntactic elements (like subject, object, noun, verb, and aspects of phrase structure rules) are innate, along with rules for linking syntax to semantics (Pinker 1984, 1989). What the child must learn when acquiring language are the exact parameter settings of the language, for instance, whether verb phrases in the target language are V + NP or NP + V. How does a child come to have all this innate structure? Inborn, inherited traits must be the result of evolutionary processes (e.g., Pinker 1994). Children are the beneficiaries of thousands of years of natural selection. Language, with its complex structure and
apparent design, and its corresponding language faculty, evolved gradually through natural selection. Over evolutionary time, those individuals who could better perceive, parse, and organize communicative information from others, had reproductive advantages. As a result of natural selection, everyone’s brains eventually had these skills at the ready (Pinker & Bloom 1990; Pinker 1994). Over generations, proto-​ languages and their grammars became increasingly complex. Through this evolutionary process, modern humans have come to have language-​ready brains that favor the acquisition of particular patterns and structures. In cases where children do not share a language with others, but possess modern human brains, connected by a social drive to communicate, a rich, structured language should emerge relatively quickly.

28.2.2  Structure from cultural processes

Critics of the semantic bootstrapping proposal object that the story is evolutionarily implausible, because it requires a simultaneous evolution of syntactic categories, underspecified phrase structure rules, and the linking rules (e.g., Bates 1984; Tomasello 1995). A second criticism is that the proposed innate rules fail to account for the diversity observed in the world's languages (Baker 2003; Evans & Levinson 2009). Thus, opponents proposed quite different theories. For instance, the Verb Island Hypothesis rejects the view of the child as having productive language, and proposes a much less linguistically endowed child, one who does not come equipped with innate syntactic categories or rules (Tomasello 1992, 2000). Rather than being built into the child through evolution, syntactic categories are gradually acquired by the child over years of early development. According to these proposals, it is the child's rich capacity for social, rather than specifically linguistic, learning that leads to the emergence of structured language over generations of transmission. On this view, language, like science and mathematics, is a product of cumulative cultural evolution, where each generation builds on the work of the preceding generation, resulting in a gradual 'ratcheting-up' of complexity (e.g., Tomasello et al. 1993; Tomasello 1999). The evolution of shared intentionality, in turn, allowed human language to evolve through the coordination of complex collaborative activities such as foraging (Tomasello 2008). The structure of language arises through this process of iterated learning over generations (e.g., Tomasello 1999; Kirby et al. 2008). Under this account, any creation of language in an individual, or a group of individuals, should be slow and protracted, requiring generations of users and time.

28.2.3  Structure from acquisition processes

Between these two extremes are alternatives that draw on both the structure of individual minds, particularly in childhood, and features of the transmission process. Languages may begin to emerge in a generation and develop through social processes over decades rather than millennia. Work from artificial language learning has demonstrated that some structure can emerge over a few generations of transmission in the laboratory (e.g., Kirby et al. 2008) and some evidence from the study of pidgins and creoles suggests that children may play a role in driving the development of the structures found in human languages (e.g., Bickerton 1981, 1984; Romaine 1988; Newport 1990; Pinker 1994; Gleitman & Newport 1995). The structure of language may emerge through acquisition processes where, as children learn a language, they actively filter and rebuild it (e.g., Senghas & Coppola 2001; Senghas et al. 2004; Singleton & Newport 2004; Hudson Kam
& Newport 2009). Accounts of this type predict that the emergence of a new language is driven by children, and that when generations of users are in an unbroken line of contact with each other, an emerging system can undergo rapid change within decades. Note that this account allows for the possibility that different aspects of the language emerge on different time scales, as different parts of the language may arise from different learning mechanisms and sources.

28.3  Experimental evidence

The three types of approaches summarized above differ with respect to the role that children play in the process of language emergence. Some propose that language is a product of the predispositions of child learners. Others propose that language emerges through socio-cultural processes, beyond the content of any individual mind, similar to the construction of mathematics and science over historical time. Still others suggest that language emergence is the outcome of the interaction of multiple children's minds, within a peer community and over a few generations of transmission. In the sections below, we consider the evidence for children's role in language emergence. We introduce the use of artificial learning paradigms with children and adults (Section 28.3.1), gestural language creation (Section 28.3.5), and transmission chain experiments (Section 28.3.6). We also consider evidence from natural experiments, including pidgins and creoles (Section 28.3.2), the case of Simon (Section 28.3.3), homesign (Section 28.3.4), and, in more detail, consider case studies of two phenomena from emerging sign languages: the emergence of word order (Section 28.3.7) and spatial agreement (Section 28.3.8).

28.3.1  Child learners and adult learners are different: evidence from artificial language learning in the laboratory

As any adult who has ever encountered an unfamiliar language can attest, it is exceedingly difficult to identify the starting and ending boundaries of words, or to work out how to change tense from present to past, how to ask a question, and so on. How do children, with their far more limited memory and processing skills, learn a language seemingly effortlessly, and how might their particular skills shape a new language? Work with artificial language learning has pinpointed interesting differences between children and adults in their approaches to learning language-like patterns.

In artificial language learning paradigms, participants are exposed to a patterned miniature language system designed to follow specific rules set by the experimenter. At the end of the learning phase, participants are tested on their implicit knowledge of the rules and patterns of the language, through tasks like sentence completion and grammaticality judgments. Hudson Kam & Newport (2009), using such a paradigm, experimentally varied the unpredictability, or 'noise', in the target language, manipulating the selection and frequency of different determiners with different nouns. The authors found that when learning an artificial language in which grammatical forms occur probabilistically (that is, in which rules are followed some proportion of the time), adults usually probability-match, reproducing the probabilities in their own productions, except in cases of extreme noise, in which case they would 'regularize' by choosing the most frequent determiner form. In contrast, children would usually regularize, even when the noise was not extreme, and interestingly, there was variation in the way that they regularized. In other words, children did not always regularize by using the most frequent determiner. Some children
regularized by omitting all determiners, or by using determiners with nouns in transitive but not intransitive sentences. In addition, children seemed to be imposing what appeared to be linguistic rules, based on, for example, distinctions between transitive and intransitive sentences, or subjects and objects. In short, children will regularize and systematize noisy input, whereas adults will reproduce the inconsistencies in their input, and regularize only when the noise is great (Hudson Kam & Newport 2009). An explanation offered by Hudson Kam & Newport (2009) for why children regularize more than adults is the limitations of children’s cognitive abilities. This is referred to as the “Less is More” hypothesis (Newport 1988, 1990). For instance, children’s relatively poorer memory may prevent them from being able to retrieve some forms, particularly if they occur with low frequency or are inconsistently used, making children more likely than adults to overproduce certain forms, leading to regularization. Adults are better able to store a larger range of forms, and consequently may be able to reproduce a greater variety, leading to the observed pattern. Such artificial language studies with children have demonstrated how children approach the task of acquiring language differently than adults, imposing structural regularities different from the ones that generated their input. Crucially, the children in this experiment seemingly changed inconsistent noisy input into structured output following grammatically based rules. Such biases in child learners may well equip them to reshape language as they acquire it.
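
To make the contrast between probability matching and regularization concrete, the sketch below simulates the two learner profiles for a single determiner choice. It is only a schematic illustration, not the design of Hudson Kam & Newport (2009): the determiner labels (det-a, det-b), the 60% input rate, and the all-or-nothing regularization strategy are invented for the example, and, as described above, real child learners regularized in several different ways.

```python
import random

def produce(p_det_a, n, strategy):
    """Generate n determiner choices given an input probability for 'det-a'.

    'match'      -- probability matching: reproduce the input statistics.
    'regularize' -- regularization: (almost) always use the majority form.
    """
    if strategy == "match":
        p = p_det_a
    else:  # regularize toward the more frequent input form
        p = 1.0 if p_det_a >= 0.5 else 0.0
    return ["det-a" if random.random() < p else "det-b" for _ in range(n)]

random.seed(1)
for strategy in ("match", "regularize"):
    output = produce(p_det_a=0.60, n=1000, strategy=strategy)
    share = output.count("det-a") / len(output)
    print(f"{strategy:10s}  input P(det-a)=0.60 -> output P(det-a)={share:.2f}")
```

An adult-like 'match' run reproduces roughly the 0.60 input rate, while the 'regularize' run pushes the majority form to (near) 100%, the kind of systematization attributed to child learners.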

28.3.2  From a pidgin to a creole through language acquisition processes

Cases of language formation in communities are seen in pidgins and creoles. While linguists disagree about the exact definitions of pidgins and creoles, pidgin generally refers to language systems created by adults who need to communicate but share no common language when they come together or are brought together for a purpose, such as trade or work on plantations (Holm 1988). In pidgins, most of the lexical forms are borrowed from one of the contact languages, often the language of the socially dominant group (sometimes called the superstrate), consisting mainly of nouns and verbs and lacking function words, and there is little if any grammatical morphology (e.g., DeCamp 1971; Hymes 1971; Sankoff 1979; Koopman & Lefebvre 1981; Mühlhäusler 1986). Over time, as the pidgin becomes more frequently used, and the community stabilizes, a creole will emerge. Creoles exhibit more structural complexity than pidgins, including properties such as tense marking, word order, and so on (Hall 1962, 1966; Koopman & Lefebvre 1981; Bickerton 1984).

There are a range of proposals for the factors responsible for the change of a pidgin into a creole (see, e.g., Hall 1966; cf. DeGraff 1999; McWhorter 2001). Some have argued that creoles emerge when a pidgin is acquired by native users, exposed to the language from birth rather than in adulthood (e.g., Bickerton 1981, 1984; Romaine 1988; Adone & Vainikka 1999; Bruyn et al. 1999). In his influential Language Bioprogram Hypothesis (LBH), inspired by Chomsky's nativist account of language knowledge, Bickerton (1981, 1984) argued that the structure present in creoles is a direct product of children's Language Acquisition Device (LAD), as children exposed to a pidgin reorganize the impoverished input to create a natural language. For example, Bickerton (1977, 1984) observed in his interviews with speakers of Hawaiian pidgin and creole that while the oldest adults used pidgin, the following generation, exposed to them as children, used creole. Bickerton hypothesized that adults could not systematize irregular input because they had passed
their critical period for language acquisition (Bickerton 1981). Further, Bickerton claimed that the different creoles appear to share structural similarities, exhibiting, for instance, consistent word order and tense-aspect-mood marking, and proposed that the seemingly consistent grammatical patterns across different creoles (cf. Muysken 1981; Singler 1990; Bruyn & Veenstra 1993) came from the default values of the LAD. The reorganization of a pidgin into a creole was taken as evidence for an innate language faculty (Bickerton 1980, 1981; Pinker 1994). Hotly controversial, Bickerton's LBH spurred a great deal of work and debate. In its strictest form it has largely fallen out of favor within the pidgin/creole literature on the basis of several critiques. One concerns whether creoles really do exhibit a systematic or prototypical syntax as Bickerton proposed (e.g., Mufwene & Dijkhoff 1988; Arends 1989; Kouwenberg 1990; DeGraff 1992; Arends et al. 1995; Veenstra 1996). Others have questioned the claim that creoles arise when children acquire the pidgin natively (e.g., Hymes 1971; Singler 1992; Aitchison 1996). Such alternative accounts argue instead that the expansion of a pidgin into a creole is due to the pidgin's becoming the language of primary use within a community. These alternative accounts vary in the role of other factors, such as when and how source languages influence the creole (e.g., Lumsden 1989; Lefebvre 2001). Still others have argued that both children and adults drive creolization (e.g., DeGraff 1999). Owing to the limitations of historical data, it is difficult to test these accounts empirically. Indeed, our general ability to make inferences from the data from pidgins and creoles is limited. Three issues are most relevant to our discussion. First, child creole users were often exposed to their parents' language at home. This dual exposure allows for the possibility that structures observed in the creole were borrowed, as many have argued, rather than stemming directly from the child. Second, the observed similarities across creoles may be due to similarities in the contact languages rather than arising from shared structures in human minds. For instance, while many creoles use SVO order, so do many of the parent languages (e.g., English, French, and Portuguese). Third, the history of the language must be inferred from the current production of the older adult speakers (e.g., comparing 90-year-old pidgin speakers to 70-year-old creole speakers), leaving unanswered questions about the timescale of language emergence. It is not known what input the creole learners received in childhood, whether creolization occurred within a single generation of children (indeed, some have argued that the creolization process in Hawai'i occurred in two phases across two generations (Reinecke 1969; Roberts 2002)), or if the older generation acquired changes introduced by subsequent generations. Among these competing accounts for the source of increased complexity observed in creoles relative to pidgins, the discussion of the role of children's acquisition processes remains central in the debate around the origins of language and its structure (see DeGraff 1999). Current cases of children creating and modifying communication systems in isolation shed additional light on the possible answers to these questions.

28.3.3  When the output surpasses the input: evidence for child learning mechanisms

Further evidence of how children are able to turn noisy input into an organized output comes from the case of Simon (Singleton & Newport 2004), in what has been described as "creolization by a single living child" (Pinker 1994: 39). Simon was a deaf child born to deaf parents who did not learn American Sign Language (ASL) until late adolescence,
and whose language consequently lacked the fluency of early-​exposed signers. Simon attended a school with other deaf children, but none were reported to use ASL, leaving his parents as his only source of ASL input. Looking at verbal morphology, Singleton & Newport (2004) compared Simon’s performance on different grammatical tests with that of his parents and eight native signers, deaf children born to deaf signing parents. His parents’ performance was also compared to that of adult signers, including native signers and other late learners. Singleton & Newport (2004) report that Simon’s parents used morphological markers inconsistently, scoring similarly to other late learners of ASL, and below native signers. Simon, in contrast, outperformed his parents, scoring similarly to native signing children on most morphemes (though not all). Thus, Simon’s language was more grammatical and structured than that of his parents, even though they served as his sole input to ASL. Clearly, Simon’s output surpassed his input, though his performance was not quite equal to that of a native signer. Moreover, Simon’s performance was better on the categories on which his parents scored highest (motion and location, where the correct morpheme was produced by his parents 60–​80% of the time). Where Simon did not achieve near native-​like performance were the forms that his parents produced with the least consistency (handshape, where the correct form was produced by his parents around 45% of the time). Singleton & Newport (2004) describe Simon’s pattern as one of frequency boosting, where he used his parents’ most consistent form even more consistently. When the input was more consistent (motion and location), Simon was able to extract and regularize those forms. When the input was less consistent, Simon did not show as strong a pattern of frequency boosting and systematizing, but analysis of his errors revealed some systematicity compared to his parents’, suggesting an emerging organization. These results highlight the creative power that results from the combination of two key conditions: a child learner, and generational transmission. Even when exposed to a noisy or incomplete language model, a child learner can still sometimes extract enough of a signal to recreate a structured pattern. Note that Simon did not faithfully acquire his parents’ rendition of ASL. Indeed, no child reproduces their parents’ language exactly. Evidently, what children add to the equation is not the ability to accurately imitate. Rather, it is an ability to extract a signal, likely aided by a preference for the particular kinds of patterns evident in natural languages. Whether characterized as an innate language faculty (e.g., Pinker 1994) or a more domain-​general learning mechanism that enables the child to clean up noisy input (e.g., Hudson Kam & Newport 2009), it exerts its reorganizational power at the time that language is passed down, from one generation to another.

28.3.4  Language creation by child isolates: the case of homesign

Creolization and the case of Simon are examples of language learning when the input is noisy or incomplete, but the input is still linguistic content, that is, it contains forms that are used and shared by a mature language community. Mature languages are highly structured and learnable, a product of multiple generations of learners who have shaped the language (Christiansen & Chater 2009). As such, when children are exposed to mature languages, their output is highly similar to the input. In other words, the language children speak and sign is practically indistinguishable from the language of their parents and other adults in their community. We can learn a great deal from cases with a more
measurable gap, where the input to the child is not the product of multiple generations of learners. These are cases where children receive little input, or input that is not conventionally linguistic, where the forms are generally not used consistently in a wider communicative context. One such area of evidence comes from homesign systems, gestural communication systems created by deaf children who are not exposed to a manual or visual language that would be accessible to them, nor can they access the surrounding spoken language. Work over the last few decades has shown that homesign systems exhibit some key properties of languages, what Goldin-​Meadow (2003) refers to as “resilient” properties. Homesigners are consistent in the gestural forms they produce, revealing stable lexicons (Goldin-​Meadow et al. 1995). Indeed, the authors describe an incident when a young homesigner corrected his little sister when she used an ‘incorrect’ gestural form to describe an object. The concepts that are lexicalized in homesign are very similar to the concepts that are lexicalized in natural languages, referring to objects like ‘grape’, and actions like ‘eat’ and ‘fly’. The individual gestures also exhibit regularities that approximate something like morphological structure (akin to how words are formed from morphemes). Goldin-​Meadow et al. (1995) demonstrate that American homesigners use consistent form-​meaning mappings for handshape and motion morphemes, for instance, using the same handshape for objects that share common properties, such as similar length or size. This structure appears to be generated by the child homesigners. When the researchers analyzed the mothers’ gestures, they found that they did not conform to the morphological system in the child’s gestures, omitting many of the handshape and motion elements. Homesign systems exhibit combinatorial structure at both the word and sentence level. Homesigners do not produce individual lexical items in isolation, but rather string them into sequences, analogous to sentences. Examining the productions of homesigners in America and China, Goldin-​Meadow & Mylander (1983) noted that, similar to what was observed at the word level, homesigners’ sentences were more complex, containing more gestures within a sequence than their mothers’. The multi-​gestural sequences produced by the children also exhibited some regularity in gesture order, where gestures for intransitive actors and patients were produced before gestures for actions, following roughly SV or OV orders. These regularities were not observed in the mothers’ gestures (Goldin-​Meadow & Mylander 1998). Word order regularities were also observed in the productions of adult homesigners in Nicaragua. These adults had developed and used their individual homesign systems as their primary language since childhood, and had never acquired a mature sign language. Their homesign sentences included subject-​like arguments (as agents and non-​ agent subjects), in a common structural position at the beginning of clauses (Coppola & Newport 2005). In addition, homesigners’ parents and siblings were generally unable to correctly interpret the information encoded by the structure of the homesigners’ productions, further evidence that the structure originates with the homesigners and not their family members (Carrigan & Coppola 2017). The evidence from cases of homesign reveals the kind of structured communication system a single child mind can create. 
This work suggests that homesign systems are not shaped and expanded by parental input, nor by the language of the surrounding community. Rather, the structure that is present comes from the child. Because homesign systems are not passed down to a new generation, they may lack the opportunity to restructure and reorganize the various devices of the system into a fully integrated, coherent
language. Even so, individual homesigners are able to build pockets of regularity and linguistic structure across the utterances they produce.

28.3.5  Structure from human cognition: gestural language creation in the laboratory

Other work looking at language creation in the laboratory with non-signing participants leverages the manual modality as a mode of communication, and the language-like status gestures can hold in certain contexts. In these experiments, hearing, non-signing participants create gestural communication systems to describe different events without using speech. Participants create a range of gestures, using the hands to represent entities (such as a closed fist to represent a ball) and using the body to represent the attributes of, or actions taken by, an animate entity in the event (like holding up two half-closed fists to one's eyes to indicate a person wearing glasses, or putting up two hands and acting out pushing to represent a pushing event). One advantage of using gestural language paradigms is that participants evidently set aside their native spoken language biases when producing gestural descriptions.

In the early work on gestural language creation, Goldin-Meadow et al. (2008) showed participants events with two entities, the majority of which were non-reversible, meaning only one of the arguments could serve as the agent, and the other as the patient, as in a captain swinging a pail, or a girl kicking a ball. The agent was typically animate (e.g., captain, man, girl) and the patient inanimate (e.g., ball, guitar, pail) or, if animate (e.g., baby, dog), less plausibly able to carry out the action, like swinging, petting, or kicking. The authors observed that the order of gestures within the participants' descriptions frequently did not follow the dominant or default word order of their native languages, and other researchers later expanded this finding to users of additional languages. Across language groups with different dominant word orders (Chinese, English, Italian, Japanese, Korean, Spanish, and Turkish), participants robustly produce SOV, SV, and OV orders in their gestured descriptions of non-reversible events, as in girl ball kick (Goldin-Meadow et al. 2008; Langus & Nespor 2010; Gibson et al. 2013; Hall et al. 2013).

This finding of a strong bias for SOV order has drawn the interest of many cognitive scientists. The unmarked order of words in a sentence (constituent order) is a basic organizational principle of language. In many languages, constituent order reliably indicates argument roles. Most of the world's languages are reported to have a dominant or default constituent order (1188 out of 1377 in WALS, Dryer (2013)). For the three major constituents of sentences, Subject (S), Object (O), and Verb (V), there are six possible word orders: SOV, SVO, OSV, OVS, VSO, VOS. However, far from all six orders being roughly evenly represented, subject-initial word orders, namely SOV and SVO, dominate in the world's languages (48% and 41% of the 1188, respectively).3 The prevalence of subject-initial word orders is often argued to stem from our conceptualization of entities and events (Halliday 1967; Bever 1970; MacWhinney 1977; Osgood 1980; Prince 1981; Dowty 1991). One proposal is that speakers prefer to mention animate referents first in a sentence, and prefer to place animate arguments in the subject role (MacWhinney 1977). Both preferences are satisfied if the subject appears first. The appearance of an animate referent at the beginning of the sentence allows the speaker to describe events from the perspective of the agent, which most closely maps onto the speaker's own experience of the world (MacWhinney 1977). While different accounts
focus on different conceptual distinctions (e.g., animate arguments first, causal agents first, etc.), they all argue that subject-​initial word orders predominate due to the structure of human thought and communication preferences, rather than appealing to structural linguistic explanations. We return to our discussion of the seeming bias for SOV and whether we observe such a preference in cases of natural language emergence in Section 28.3.7.3. While these gestural paradigms do not capture the creation of a full language, they may reveal one piece of how languages are created, and also point to potential effects of language modality on language structure. Findings suggest that some aspects of structure in language may stem from the structure of human cognition and communicative biases.
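
The order-coding step in these gesture studies can be illustrated with a small sketch. The role-annotated 'descriptions' below are invented for the example (real studies code videotaped gesture strings); the point is simply how each string is collapsed to a constituent-order label such as SOV or SV and then tallied.

```python
from collections import Counter

# Each elicited description is an ordered list of (gloss, role) pairs,
# where the role is 'S', 'O', or 'V'.  These toy items are hypothetical.
descriptions = [
    [("girl", "S"), ("ball", "O"), ("kick", "V")],
    [("captain", "S"), ("pail", "O"), ("swing", "V")],
    [("man", "S"), ("guitar", "O"), ("play", "V")],
    [("girl", "S"), ("kick", "V")],                  # SV: object omitted
]

def order_label(description):
    """Collapse a role-annotated gesture string to its constituent order."""
    return "".join(role for _, role in description)

counts = Counter(order_label(d) for d in descriptions)
verb_final = sum(n for label, n in counts.items() if label.endswith("V"))
print(counts)                                   # Counter({'SOV': 3, 'SV': 1})
print("verb-final:", verb_final, "of", len(descriptions))
```

The same kind of tally underlies reports that gesturers favor SOV-type orders and, later in this chapter, counts of verb-final utterances in ABSL.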

28.3.6  Intergenerational transmission introduces structure

Thus far, we have focused our discussion of how language emerges in cases of language creation within a single learner or generation. We do not know how those systems might have continued to develop had they been passed down to a new child or group of children. All mature languages have been passed down from generation to generation. How might generational transmission create or alter the structure of language? Does the transmission process itself leave its own mark? Experimental evidence from laboratory paradigms using diffusion chains with adults suggests that the process of reiterated learning and transmission is a necessary ingredient for language creation, particularly the creation of increasingly patterned structures. In diffusion chain experiments, participants are exposed to a target behavior produced by the experimenter and attempt to replicate that behavior for a second participant. The second participant tries to replicate the behavior for a third participant, who tries to replicate the behavior for a fourth participant, and so on. Each iteration is called a 'generation'.

In one such experiment, Kirby et al. (2008) exposed adult participants to an artificial 'language' and observed the changes that emerged as it was passed down over multiple generations. Participants learned a set of words, presented as written labels for colored objects in motion (e.g., a red square moving along a straight path; a blue circle bouncing). The authors found that over generations, the language became more structured (as measured by the minimal number of differences between related meanings) and learnable (as measured by the number of differences between the output produced by a learner in one generation and the output of a learner in the next generation). In other words, over generations, words took on a structure, such that words for red objects, or for circles, or for a bouncing motion, shared commonalities. In this case, the structure is evident in the patterns of word formation within a set of lexical items. However, Kirby et al. (2008) noticed that the language also became less specified and more ambiguous over generations. Words were being reused by participants to express increasingly broader meanings. Ultimately, this underspecification generated an easily transmittable and learnable language, namely one that contained fewer and fewer words, but was highly ambiguous. To address this, the authors filtered the input such that only one meaning of any homophonous word in one participant's output was in the input for the next participant. The artificial languages emerging in these cases exhibited compositional structure, where words for items that shared similarities (color, shape, or motion) shared the same syllable. This, and similar experiments, nicely demonstrate a role for reiterated learning in the emergence of structure over generations. At the moment or moments of learning or
relearning from the input, the learner has an opportunity to reorganize what is available to generate a system that is more systematic, or re​ordered according to a new pattern. In these experiments, the structures that emerged, such as a lexicon of multi-​syllabic words in which one syllable represents the shape, another the motion, and another the color, do not necessarily resemble those seen in natural languages. The particular structures are a product of the context of the experiment, in this case, adult learners faced with generating naming conventions for a highly constrained set of items. Even so, of interest to us is the opportunity for reorganization and restructuring that occurs at each generational transition. Another aspect to note is that in Kirby et al. (2008), a structured language did not emerge in every transmission chain, and in chains where structure did emerge, it was not always faithfully transmitted to later generations. In contrast, children generally do not fail to successfully learn a natural language. While intergenerational transmission is necessary for natural language emergence, not every case of such transmission (such as what we observe in the laboratory) yields natural language-​like structure. Given the difference in the nature of the task and the learners, the systems that emerge in laboratory experiments do not look like the systems of natural languages, nor would we expect them to. Indeed, one reason why the laboratory-​generated systems may fail to resemble those seen in natural languages is precisely because the learners are not children. Languages survive only if they are transmitted to the next generation, and in nature, this process always occurs during childhood. Thus, a complete understanding of language emergence relies on understanding what children can acquire during this process of transmission. Nonetheless, this work and others like it, clearly demonstrate a role for reiterated learning in the transmission and structuring of language. We turn next to the study of two sign languages that recently emerged in communities with multiple generations of users.
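
Before turning to those sign languages, the transmission-chain logic can be sketched in a few lines of code. This toy chain is not Kirby et al.'s (2008) actual design, nor their compositionality measure: the meanings, syllable inventory, sample size, and the simple 'labels changed between generations' error measure are all invented for illustration. It only shows how repeated learning from a limited sample tends to make a lexicon easier to transmit, partly by reusing labels, which is the ambiguity problem noted above.

```python
import random

# Meanings are (color, shape) pairs; labels are two-syllable strings.
colors, shapes = ["red", "blue"], ["square", "circle"]
meanings = [(c, s) for c in colors for s in shapes]
syllables = ["ki", "la", "mo", "nu", "po", "te"]

def random_language():
    # Generation 0: an unstructured lexicon of random labels.
    return {m: random.choice(syllables) + random.choice(syllables)
            for m in meanings}

def learn(teacher, n_seen=3):
    """One 'generation': learn from a subset of the teacher's productions
    and fall back on a reused label for any unseen meaning."""
    seen = dict(random.sample(list(teacher.items()), n_seen))
    return {m: seen.get(m, random.choice(list(seen.values()))) for m in meanings}

def transmission_error(lang_a, lang_b):
    # Toy learnability measure: how many labels changed between generations.
    return sum(lang_a[m] != lang_b[m] for m in meanings)

random.seed(0)
language = random_language()
for generation in range(1, 6):
    new_language = learn(language)
    print(generation, transmission_error(language, new_language), new_language)
    language = new_language
```

Over a few iterations the error typically drops toward zero as labels are reused across meanings: the lexicon becomes easy to learn but increasingly ambiguous, which is why the filtering step described above was needed before compositional structure could emerge.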

28.3.7  Emerging sign languages: NSL and ABSL

Recent evidence for the contribution of multiple generations of (child) learners comes from the study of emerging sign languages, particularly Nicaraguan Sign Language (NSL) and Al-Sayyid Bedouin Sign Language (ABSL). NSL is an example of what are called deaf community sign languages, which arise when unrelated signers gather together in one place, typically at a school. Many European and North American sign languages exhibit similar historical origins (Woll et al. 2001). In contrast, village sign languages develop in communities with a high incidence of hereditary deafness. ABSL is an example of one such language, emerging in the last 75 years in an isolated community that was founded around 200 years ago in the Negev region of Israel. In 2005, the Al-Sayyid community was reported to be in their seventh generation with approximately 3,500 members (Sandler et al. 2005). Owing to high rates of consanguineous marriage, the incidence of hereditary deafness has increased over the last three generations, and the community now has around 150 deaf individuals. In this close society with large, extended family households, ABSL is used by those deaf individuals as well as many hearing relatives. Deaf members are fully immersed in rich social contact in this community, and the sign language is considered by many to be one of the village's primary languages (Sandler et al. 2005). In the following sections, we consider a selection of phenomena in the study of word order and spatial agreement in NSL and ABSL. Some of these areas have been
extensively researched, and others are still in the early stages of study. In each case, we consider the role played by social conditions under which new, structured human language can emerge. These conditions include the presence of child learners, the presence of an extended peer community, and the opportunity for intergenerational transmission.

28.3.7.1  Word order in ABSL Sandler et  al. (2005) elicited spontaneous narratives and descriptions of short video clips from eight second-​generation ABSL signers. They observed that the majority of the utterances (138 of 156) were verb-​final, and this pattern was present in most of the signers (7 of 8). In sentences that included the Subject, Object, and Verb, SOV was the most common word order observed. The authors also counted SV and OV sentences as following the SOV pattern. Interestingly, they found no evidence of OSV in their data. A consistent word order that differed from the surrounding sign and spoken languages4 appears to have emerged early in ABSL. The authors note that the youngest signer of the eight exhibited a pattern different from that of the other seven signers, with a slight preference for orders that were not verb-​final (8 of 15 sentences). The few non-​verb final orders they observed were SVO, VO, and two orders with indirect objects (IO):  SV-​IO-​O and S-​O-​V-​IO. Because there was no systematic comparison of the language of the second and third generations, it is unclear whether the youngest signer’s slight shift away from a predominantly SOV order reflects a change in the language or noise in the data. In addition, because none of the first-​generation ABSL signers is still living (Sandler et al. (2005) notes that there is sparse data, aside from reports that they did sign), we do not know whether a consistent word order was evident even in the very first group of signers, or if the second generation introduced any changes or regularities in word order patterning. Nonetheless, it would seem that a consistent word order emerged early, appearing by the second generation of signers in ABSL.

28.3.7.2  Word order in NSL In the first investigation of word order in NSL, Senghas et al. (1997) elicited intransitive, transitive, and ditransitive sentences from four first-cohort signers and four second-cohort signers. Here we consider the transitive sentences, that is, descriptions of events containing two entities, in order to examine how word order might be used to differentiate subjects and objects. The events were divided into two classes, those with one animate argument and one inanimate argument (irreversible) and those with two animate arguments (reversible). In contrast to what was observed in ABSL, Senghas et al. (1997) found no clearly dominant word order preference in NSL across signers, but observed some differences in the word orders used across the two cohorts. For irreversible events, signers from both cohorts produced sentences containing one or both arguments and one verb, N-V or N-N-V (e.g., girl tear or tortilla girl tear). Looking at the ordering of the two arguments, SOV and OSV were used with approximately equal frequency. However, for the reversible events, a different set of word order patterns was observed, none of which consisted of only one verb and two nouns. Instead, for the events with two animate arguments, first and second-cohort signers both produced sentences containing two verbs, one for each argument. In the sentences produced by the first cohort, each argument was followed immediately by a verb: N-V-N-V (e.g., man tap woman get-tapped).
Signers from the second cohort also produced two verbs for events with two arguments, but differed in their word order, often producing sentences with the verbs adjacent to each other, N-​N-​V-​V or N-​V-​V (e.g., man woman tap get-​ tapped or man tap get-​tapped). Following up on Senghas et  al. (1997)’s work, Flaherty (2014) investigated word order in the first three age cohorts of signers, as well as Nicaraguan homesigners. Across homesigners and signers from all three cohorts, the majority of the utterances containing at least one verb and one noun were verb-​final (802 out of 806). Of the verb-​final sentences with two nouns (190 out of 802), 61% were produced with SOV order. OSV was also frequent, appearing in a little less than half of the sentences (around 43%). SOV was more frequently used for irreversible events (with an inanimate patient) than for reversible events (with an animate patient) (67% vs. 57%). However, Flaherty (2014) observed very few instances of SVO order when the event was reversible, observing more frequent use of OSV instead. While the differences in word order patterning between Senghas et al.’s (1997) investigation and Flaherty’s (2014) results are suggestive of change in the language over the last two decades, Flaherty still reports a great deal of individual variation in word order preference. Many signers did not exhibit a strong preference for one ordering of S and O over another. Evidently, NSL has not yet converged on a single dominant word order, with both SOV[V]‌and OSV[V] being frequently used. The most apt characterization of the language is that it exhibits a strong verb-​final pattern.

28.3.7.3  Word order in gestural language creation As discussed in Section 28.3.5, typological data as well as findings from gestural language creation studies have been taken by some as evidence for SOV as the default word order for human language. If we take SOV to be a cognitive default, we might ask why SVO occurs as often as it does in the world's languages. Findings from gestural language paradigms suggest an answer. In the early gestural studies (e.g., Goldin-Meadow et al. 2008), participants were asked to describe events involving two entities, the majority of which were non-reversible (e.g., a captain swinging a pail). More recent work has revealed that for reversible events that include two referents that are both plausible agents, gesturers shift away from the use of SOV order. For example, when describing a kicking event involving two animate entities both capable of kicking (e.g., a boy and a girl), participants were far less likely to use SOV order, producing SVO instead (Gibson et al. 2013). Hall et al. (2013) observed that SVO was not the only word order that participants used when describing reversible events; participants also produced other orders such as OSV, SOSV, and so on. Two explanations were offered for this switch away from SOV when describing reversible events: the noisy channel account (Gibson et al. 2013) and the role conflict account (Hall et al. 2013). The noisy channel account proposes that the reversibility of the event, which makes it ambiguous and potentially confusable, leads to a shift away from SOV to SVO, where the Verb separates the Subject and Object. The separation of the Subject and Object by the Verb is argued to allow the listener to more easily recover the meaning of the sentence in the case of information loss. With SOV order (BOY GIRL KICK), a listener who misses the beginning of the sentence would get GIRL KICK, which could be SV, with the girl as agent, or OV, with the girl as patient. But with SVO order, a listener who misses the information at the beginning would still get KICK GIRL, and, knowing the word order, correctly interpret this to be VO, with the girl as patient.

The role conflict account posits that modality effects constrain preferred orders for reversible events. The potential for embodiment of two animate entities in the reversible events pushes participants towards word orders where the verb does not immediately follow the object (as in SOV order). Having the verb follow the object creates a role conflict for the gesturer, who would have to take on the role of the patient right before describing the action. For example, to use SOV order in describing an event in which a boy pushes a girl, the gesturer would first take on the role of the boy (the agent), and then the girl (the patient), and then enact pushing (the action); that is, producing an action embodied from the perspective of an agent right after embodying the role of the patient. To avoid this role conflict, Hall et al. (2013) propose that gesturers switch to orders where the subject immediately precedes the verb, which include SVO, OSV, SOSV, and others. Kocab et  al. (2017) set out to disambiguate between these two competing hypotheses by looking at reversible events with two inanimate entities, such as a truck hitting a car. They found that when describing reversible events involving two inanimate entities, gesturers shift away from SOV in favor of SVO and OSV. This finding suggests that the avoidance of SOV order when describing events involving an animate agent and patient is not driven by the reversibility of the event. However, the increase in the use of OSV in the inanimate-​inanimate reversible trials is also not easily explained by the role conflict account, which makes no predictions about sentences with inanimate objects, nor about why OSV sequences should be greater for inanimate-​inanimate events than for animate-​ animate events. One possible explanation for the increase of OSV and the drop in the use of orders with O before V with two inanimate entities concerns the manual nature of gestural descriptions. While the subject appears before the action in SVO, this order may be dispreferred because its use would require the gesturer to move the hand representing the agent (S)  through space to represent the action (V)  without having described the patient (O). This explanation is a modality-​specific one. Crucially, this finding suggests that preferences for certain word orders may emerge not because learners fall back on some default order, but in response to other pressures such as modality, affecting how languages shift and settle.

28.3.7.4  Discussion: is one word order the default? Recall that there is an unequal distribution of the six possible word orders, with subject-​ initial word orders comprising close to 90% of the world’s languages. Different proposals (see Section 28.3.5) have been made for the prevalence of subject-​initial orders in general but leave open the question of why one subject-​initial order, SOV, is more frequent than the other, SVO. Work from gestural language creation in the laboratory suggests that while this preference for SOV seems robust, it can be shifted depending on the events to be communicated and the modality of the communication. This bias for SOV appears robust in one emerging sign language, ABSL, and less clearly dominant in another, NSL. There are three additional sets of evidence that are often cited in support of SOV as a default order. First, languages that start out SOV have been observed to diachronically change to SVO, but not the reverse (Gell-​Mann & Ruhlen 2011). Second, homesign systems appear to share this bias for SOV, where utterances with two arguments often follow SV or OV orders, and rarely VS or VO orders (Goldin-​Meadow & Mylander 1998; Goldin-​Meadow 2003). Additionally, many sign languages show a strong tendency
to use and accept SOV orders. In a survey of 42 sign languages, SOV order was found to be permissible in all of them, while SVO was permissible only in some (Napoli & Sutton-​Spence  2014). Given that SVO is almost as frequent as SOV in the world’s languages, this typological situation could be construed as one where the two orders are similarly prevalent and are equally compatible with the cognitive and communicative constraints on language. However, this possibility has not been widely discussed in the literature (see Kocab et al. 2017). In addition, work from language acquisition suggests that children are well equipped to acquire a variety of word orders. Children master the constituent order of their native language quite early in childhood (e.g., Chen Pichler 2001), using this information to determine the roles in a sentence. Children, as drivers of language complexity, can and do introduce changes to language, particularly when there is a shared preference or learning bias. If SOV is a cognitive default, we might expect that over time, as a language is passed from generations of child learners to the next, other factors aside, languages should shift back to SOV. Gestural language creation experiments offer us insight into a fraction of the possible pressures underlying language change. These paradigms generally do not involve children, in a community of learners, with opportunity for generational transmission, and the complexity of language is not well captured in these laboratory communication systems. For instance, languages use strategies other than word order, such as case marking, to convey grammatical information, and languages that have rich case systems typically have freer word order. Interestingly, the historical patterns from spoken languages suggest that changes in word order from SOV to SVO begin with a loss of case marking. Over time, through language contact and trade, such languages often lose their case system and move to a more fixed word order, where constituent order indicates thematic roles. In contrast, languages that begin with no case system typically begin with a fixed word order and rarely move to more flexible word order. This particular phenomenon, namely the use and subsequent loss of something approximating a case system, has not been robustly demonstrated in gestural paradigms (see Hall et al. 2013), and may be an important piece in the discussion of word order in emergent systems. We speculate that NSL has not yet settled on a word order because changes in the language require multiple generations. The emergence of structure is not protracted over hundreds of thousands of years because human learners share common mechanisms and biases. What is easier for one person to learn will be similarly easier for others to learn. However, whatever children’s learning biases may be, their effect on the language may not show up immediately. In the case of word order, in the initial stage, the first generation of signers generated sentences on the fly. Propelled by the primary desire to communicate, the initial process was messy, drawing on different strategies to string elements together in a sentence. When the second generation of signers entered the community as children, they were exposed to this set of utterances. Like Simon, the second group of children filtered the input, discovered patterns across the set of sentences, and turned them into more consistent rules. 
While the children’s biases did not override the input (e.g., Bickerton 1981, 1984), they were applied to the input the children received, input richer than that of the generation of signers before them. Unlike for homesigners, this process was happening across groups of children, with shared biases, converging on a common system.

Contrary to what recent theorists have argued, we observe that while the findings from gestural language creation in the laboratory may be suggestive, the evidence for whether a particular word order is the natural initial state in emerging languages is far from clear. ABSL and NSL are cases where we can observe the creation of language by child learners in a community with generational transmission, something previous cases have lacked. With these three ingredients, we can follow a dynamic system and see how it emerges. As a language is built, there are interactions between different systems, such as the interaction between word order and morphology, with cascading reorganizational changes across generations. For example, there are changes over the first two cohorts of NSL in word order, with no single dominant word order, perhaps interacting with the development of another device for indicating argument structure. With this device, there may be less pressure for the language to converge on a fixed, consistent word order. Restricting our discussion to word order, particularly when considering emerging natural languages like ABSL and NSL, where the ecology of the language is changing as it develops, would give an incomplete picture of the interacting subsystems of language. As hinted at in our discussion of case marking in spoken languages, word order is not the sole way to indicate roles in an event. Sign languages have available another device to indicate who did what to whom: the use of space, or spatial agreement/​morphology.

28.3.8  Spatial agreement/morphology In sign languages, space can be used to indicate co-reference, and can function as a link between verbs and their noun arguments. Nouns are indexed to specific areas in the physical signing space; verbs then can 'agree' with their arguments by referencing those spatial locations (loci, Friedman 1975; Klima & Bellugi 1979; also see Quer, Chapter 5, and Perniss, Chapter 17). Verbs can change the direction of movement or hand orientation so that the sign begins at the locus associated with the subject and ends at the locus associated with the object (Fischer 1975; Fischer & Gough 1978; Klima & Bellugi 1979; Supalla 1982; Padden 1983; Lillo-Martin & Klima 1990; Meir 1998). In this way, these agreeing verbs mark the person and number of arguments (subject, object, or both; Liddell 1977; Padden 1988). Semantically, they often describe transfer events (e.g., give, advise). Note that not all verbs in sign languages agree in this way (Padden 1983, 1988; Sandler & Lillo-Martin 2006). In a second class of verbs, spatial verbs, the movement indicates a path from a source location to a goal, and these verbs generally do not mark person or number inflections. Spatial verbs are typically verbs of motion (e.g., roll, go-by-car). In yet a third class, plain verbs, there is no directional marking. Plain verbs are often anchored to the body, do not inflect for person or number, and are not phonologically shaped by their arguments. Semantically, plain verbs are typically verbs of thinking, emotion, or states (e.g., believe, feel). Deaf children acquire this system of verb agreement and pronominal reference with present referents at around age 3;0 to 3;6 (Meier 1982, 1987), though some data suggest it is acquired earlier (Quadros & Lillo-Martin 2007; see also Hosemann, Chapter 6). Use of verb agreement with non-present referents appears much later in deaf children's signing, around age 5 (Hoffmeister 1978, 1987; Loew 1984). Some have argued that this delay in the acquisition of non-present referents with spatial locations is not due to a failure to identify agreement, but to a difficulty with setting up and maintaining spatial loci for non-present referents (e.g., Newport & Meier 1985). We now turn to a discussion of the emergence of these systems in new languages.

28.3.8.1  Spatial agreement/​morphology in ABSL Padden et al. (2010) investigated the emergence of spatial morphology in ABSL and Israeli Sign Language (ISL), a young sign language that emerged in Israel around 75 years ago, within a growing community of deaf individuals from a variety of language backgrounds. The present Israeli Deaf community includes around 10,000 signers, across four generations. The study included nine ABSL signers, five from the second generation and four from the third generation; and 24 ISL signers, 11 from the first generation (who come from varied linguistic backgrounds and are not native users of ISL), nine from the second generation, and four from the third generation (who also learned Hebrew and are bilingual). For signs to be spatial, there must be a spatial contrast across, or movement along, an axis in the signing space. The sagittal or Z-​axis is employed by movements directed toward or away from the front of the body; the horizontal or X-​axis is employed by movements from right-​to-​left, or left-​to-​right, in front of the signer; and the diagonal or X+Z-​axis is employed by movements toward or away from the body diagonally, to the right or left side. In this study, the signed verbs were coded with respect to the axis they employed. ABSL signers from all three generations showed a preference for the Z-​axis for potentially agreeing and spatial verbs, though for younger signers the preference was weaker. When verbs were produced along a diagonal X+Z-​axis, the ABSL signers did not assign their arguments to locations in space. In such sentences, the verb cannot be said to agree with the noun arguments (Padden et al. 2010). ISL signers, like ABSL signers, exhibited a preference for the Z-​axis for both agreeing and spatial verbs, but also used the X-​axis and X+Z-​axis frequently. In comparisons across the three generations, Padden et  al. (2010) observed that younger ISL signers show a shift in axis preference, using the X-​and X+Z-axes more frequently than the Z-​axis. Following these differences in the youngest generation, Padden et al. (2010) then looked at their use of space to mark verb agreement. Again, the youngest signers showed a different pattern from the previous two generations. The youngest signers produced verbs that agreed with two arguments around half of the time, and with one argument a quarter of the time. In contrast, over half of the older signers’ responses did not exhibit any inflection and they generally produced verbs that agreed spatially with only one argument, and rarely for two. Taken together, these results show that while verb agreement marking has yet to develop in ABSL, a system of agreement has been gradually emerging in ISL as the language was transmitted over its first three generations of signers.

28.3.8.2  Spatial agreement/morphology in NSL Senghas et al. (1997) examined whether signers employed a morphological distinction, in addition to word order, to indicate agent and patient roles (or Subject and Object). First-cohort signers did not produce many nouns indexed to specific locations. Moreover, the directionality of the verb movement was not used consistently (e.g., in a sentence, the direction of movement in push and get-pushed was not coordinated as we might expect if directionality were indicating agreement). In contrast, second-cohort signers were consistent in the direction of the verb, relying on this morphological distinction to indicate agreement. While no clear word order pattern was visible in NSL, there were generational differences between cohorts in the spatial marking of the arguments of the verb to indicate grammatical relations.

In a later study, using data from Senghas et al. (1997), Senghas (2003) more closely investigated the emergence of spatial morphology in NSL, looking at four first-​cohort signers and four second-​cohort signers. Signers were shown video clips, all with a man and two women seated at a table. The actors engaged in different actions, like coughing, tapping, and giving. For example, an event could depict a woman in the center giving a cup to a man on the left side of the screen (see Figure 28.1). Relative to the signer’s perspective, there are two possible spatial layouts that could be employed in signed representations of this event. In an unrotated layout the man would be positioned to the signer’s left, and would receive a cup from the right. In a rotated layout, the man would be positioned to the signer’s right, and would receive a cup from the left.

[Figure 28.1 panels: (a) stimulus event: a woman giving a cup to a man; (b) unrotated representation; (c) rotated representation]

Figure 28.1  Illustration of unrotated (b) and rotated representations (c) of a giving event. The event is represented in (a) by a frame from the video stimulus; the diagrams beneath the event represent the signer and the signing area as viewed from above, with a semi-circle representing the signing space in front of the signer. The movement of the verb is represented with an arrow, and the implied location of the man is marked with an X. This location is compared with the man's location in the event to determine the rotation of the representation (from Senghas 2003; © Elsevier, reprinted with permission)

Signers from the first cohort were not consistent as a group, using both layouts. Two signers used both unrotated and rotated layouts, a third somewhat consistently used a rotated layout, and the fourth somewhat consistently used an unrotated layout. In contrast, second-cohort signers showed a largely consistent preference for the rotated layout. Senghas (2003) also elicited comprehension data, presenting six signers from each cohort with the elicited sentences from the production study and asking them to select, from
a set of pictures, any events that matched the description. The pattern from the comprehension study aligned with the production data. Signers from the first cohort gave unrestricted interpretations, indicating that a description with a spatially modulated verb could equally describe the original target event and a left-​to-​right reversed version. Signers from the second cohort gave more restricted interpretations, where a description with a spatially modulated verb was interpreted to describe only the rotated representation of the event, and not the reverse. Over two generations of signers, NSL has evidently developed a morphological system for marking verb agreement, where noun arguments are indexed to locations in space, and the direction of the verb indicates the agent and patient roles. Importantly, it is the younger, more recent signers who use verb agreement consistently and in a restricted manner. Older signers do produce verbs that move directionally, suggesting that the seeds of morphological agreement are in place, but do not use this directionality grammatically to mark the argument roles in an event.

28.3.8.3  Discussion Across the second and third generations of signers, verbs in ABSL are produced along a diagonal axis, but their arguments are not assigned to spatial loci. Signers seem to consistently use SOV order (including SV and OV) to convey information about the grammatical roles of the arguments. NSL, in contrast, seems to have converged on a consistent system for the use of spatial modulations to linguistically mark the different semantic roles of agent and patient in events. Flaherty (2014) found that across homesigners and three age cohorts of NSL signers, utterances that used no spatial co-​reference were three times more likely to use SOV order than OSV order. In contrast, utterances that did utilize spatial co-​reference were produced with SOV and OSV orders with around the same frequency. While NSL does not exhibit the same word order across generations, younger signers have capitalized on the affordances of the manual modality to convey grammatical information spatially. Given the dynamic between word order and spatial agreement over language emergence, we can see how the input available to one child at one time point may be quite different from the input available to a child who came before or after.

28.3.9  Summary of ABSL and NSL and future directions These investigations of two different domains, in two emerging sign languages, NSL and ABSL, have yielded evidence for different patterns and timescales of the emergence of structure. Briefly, NSL does not exhibit a strongly dominant word order preference, beyond that for verb-final orders (SOV and OSV). However, the foundations for a morphological verb agreement system, which serves to indicate who did what in an event, appear to be in place by the second generation of signers. In light of this emerging morphological system, the prevalent use of verb-final word orders in NSL perhaps becomes clearer. ABSL does not have spatial verb agreement/morphology, instead using a fixed SOV word order to convey grammatical information about roles. Why do we observe these differences in structure at different levels of organization in ABSL and NSL? One possibility is that the tenfold difference in the sizes of the two communities of signers influences the rate of convergence on linguistic forms. The greater number of unrelated deaf signers in the NSL community creates a setting in which its members interact with a large number of communication partners of varying familiarity.
The need for displaced communication with signers who may not share knowledge about home or neighborhood matters may place greater pressure on the language to develop systematic ways of grammatically expressing information. Another consideration is that the ABSL community has a large number of hearing signers who learn and use the language. The presence of hearing signers in the community, who are fluent or native users of another, spoken language, may affect the emergence of this village sign language. Developmental studies with signing children have shown that some aspects of grammar, such as spatial agreement/​morphology, are acquired later. While only a conjecture (though a testable one), it is possible that if children do indeed play a pivotal role in the development of a language as we have argued throughout, more complex aspects of grammar, which are acquired later, may also be slower to emerge in a new language, particularly if second language learners or users of other languages make up a significant portion of the population.

28.4  Conclusion Throughout this chapter, we have discussed different theoretical proposals for language emergence, highlighting the role of child learners. Child learners approach the task of acquiring language differently than adults do, such as by filtering noisy input rather than learning it faithfully (e.g., Hudson Kam & Newport 2009). As is seen in the cases of Simon and homesigners, across very different learning environments, child learners surpass their input, even when it is sparse, creating more systematic output. Passing a language, or the beginnings of a language, through a child’s mind can filter out the ‘noise’ in a system, yielding more regular, consistent structure. However, a single child does not have all of the tools to create a new language from its raw beginnings. A single, brief childhood does not allow for restructuring. Ultimately, it is successive generations of child learners who drive the emergence of structure, and shape a language to be more and more learnable. With generational transmission, language is repeatedly filtered through children’s minds and restructured. This learning and restructuring of language over generations captures evolution in action. As the modifications that offer greater reproductive advantages are selected by evolution, so too are the parts of the language that are more easily acquired. Through this process, languages are shaped to be more and more learnable by children. In addition, children have evolved to learn language in their early years (and experience a decline in this ability as they mature into adulthood, e.g., Lenneberg 1967; Newport 1990) and are highly sensitive to input, even when degraded, in the environment. These three factors –​groups of child learners, interacting in a community, where there is opportunity for generational transmission –​are necessary for language emergence. The cases of NSL and ABSL reveal the kind of structures that emerge when all three ingredients are present. Across investigations of different domains in these two languages, here, word order and spatial agreement morphology (elsewhere, temporal language, manner, and path), we see that the changes that increase the complexity of the language are introduced by groups of children (Senghas et al. 2004; Kocab et al. 2015, 2016). Importantly, we are able to witness how a language grows because the changes are unidirectional, observed only in later generations of signers. Older signers seem to be unable to acquire these new changes, using instead older forms or alternative strategies. Thus, a cross-​sectional slice through the community today lays out the history of the language, across generations. We can compare each group of signers to uncover how the language has grown and changed over time. 656

With each generation’s contribution, the ecology of the language shifts, with cascading effects across domains. For instance, in NSL, the development of spatial morphology interacted with convergence on word order, both of which serve to indicate the arguments in a sentence. Moreover, we see the effects of modality, both in natural languages and in laboratory experiments, which shape the emergence of these systems. By capitalizing on the affordances of the manual modality (spatial agreement), the pressure to converge on a different strategy for indicating arguments (word order) lessens. The dynamic of these two systems is seen in the interlocking changes from the first cohort to the second cohort of NSL signers. For this reason, it does not make sense to ask what the ‘defaults’ might be in any one domain of the language. The answer to the question of when and where structure in language arises may differ across domains. Far from seeking a single answer, we must adopt a multi-​pronged approach, integrating perspectives and methodologies from different disciplines. More direct testing in a controlled laboratory context may help draw out answers to some of the open questions. The use of more naturalistic learning and transmission contexts, as well as comparative studies using both manual and non-​manual paradigms, might allow researchers to test hypotheses about the effects of modality on the emergence of structure. Further studies comparing the performance of children to that of adults using artificial language learning paradigms may also help uncover how different biases in different learners affects structure in language. Additionally, richer linguistic investigation of existing data, such as probing whether semantic classes of verbs may account for some of the differences in syntactic behaviors, may provide a more comprehensive understanding of the present structure. Returning to the broad theories that opened our chapter, the evidence gathered to date suggests that the story of why humans are able to create and acquire language is more nuanced than any single theory has been able to capture. The proposals vary along several lines, the most prominent of which is the nature of the child’s endowment and the mechanisms by which it shapes language: ranging from extensive innate syntax, to conceptual structure, to learning biases. Whatever the right answer, it must entail some characteristic of children that differs from adults. Children clearly play a driving role in the process of language creation, building new systems over generations, not millennia. Rather than learning and reproducing language faithfully, each new generation of children presents an opportunity for reanalysis and reshaping of the language, adding structure and increasing regularity. The open question is not whether or when a language evolved, but rather what it is about children that gives a language its characteristics. The rapid emergence of language suggests that learners and creators must be bringing something to the table: innate biases or structures (though not necessarily a default syntax) that guide the emergence of language. At the same time, language does not arise in a social vacuum. Human minds are bound together in highly interconnected, communicating social networks. Our goal in signing and speaking with each other is not to teach or impart language, but to communicate and send meaningful messages. 
Differences in community structure can affect the rate of language emergence, as evidenced by the observed differences between homesigns, NSL, ISL, and ABSL. What is it about children that allows them to acquire and create language? While evolution has equipped children with learning mechanisms well suited to this task (e.g., Pinker & Bloom 1990), we do not need to posit an extreme nativist child, equipped with full syntactic rules and categories (cf. Pinker 1984). In other words, all of the rules, or 657
possible rules, of all languages do not need to be previously specified in the child. Perhaps we can get sufficiently far by appealing to a combinatorial conceptual system that guides our expectations about the world combined with powerful learning mechanisms. Decades of infant work have shown that we come into this world with certain expectations about how the physical world operates, including whether objects will float or fall, move spontaneously, or disappear (e.g., Baillargeon 1987; Spelke 1990; Spelke et  al. 1992; Carey 2009). Infants have a rich understanding of humans as intentional agents, who act efficiently to achieve their goals (e.g., Gergely & Csibra 2003). Children’s conceptual structure gives them a starting place that enables them to learn more about the world and others as they live in it –​for example, having the ability to group and separate objects in the world supports further learning about those objects. In a similar way, children may come into the world with expectations about the nature of the linguistic elements that they will encounter, expectations that guide further learning. Three ingredients are necessary for the emergence of a language:  child learners, a social community, and intergenerational transmission. The three pieces fit together to give rise to the complex structure found in language. Child learners are equipped with biases that make learning certain kinds of constructions likely. Interaction within a social community provides pressure to build a communicative system, and to converge on a common set of vocabulary and rules. Generational transmission provides an opportunity for cascading reorganization across groups of learners. While the precise nature of the mechanisms that children are equipped with remain to be discovered, it is clear that, in the right social context, children are uniquely well suited to language learning, and are the driving force behind language emergence.

Acknowledgments We thank all of the many language consultants and research participants who have contributed to our understanding of the languages discussed here, particularly our Nicaraguan colleagues and friends. Thank you for welcoming us into your community, and enthusiastically exploring your language with us.

Notes
1 Note that Chomsky's poverty of the stimulus argument was not intended to answer the question of where the structure in language comes from or how it originated.
2 This is a domain-specific nativist account, where the ability to learn is unique to one domain of functioning (here, language). This is in contrast to domain general accounts, where the learning mechanisms are used across many domains.
3 Following the conventions of the typological literature, we use S, O, and V in a semantic sense, corresponding to agents, patients, and actions of events. Note that in our discussion of dominant or preferred word orders, we consider only transitive utterances (those with a verb and two arguments), and exclude intransitive ones (with only one argument).
4 The local dialect of spoken Arabic uses SVO; classical Arabic, VSO; Hebrew, SVO or verb-initial orders; and ISL, SVO, OSV, or, on rare occasions, SOV (Sandler et al. 2005).

References Adone, Dany & Anne Vainikka. 1999. Long distance WH-​movement in child Mauritian Creole. In Michel DeGraff (ed.), Language creation and language change: Creolization, diachrony and development, 75–​94. Cambridge, MA: MIT Press.

Language emergence Aitchison, Jean. 1996. Small steps or large leaps? Undergeneralization and overgeneralization in creole acquisition. In Herman Wekker (ed.), Creole languages and language acquisition, 9–​31. Berlin: Mouton de Gruyter. Arends, Jacques. 1989. Syntactic developments in Sranan. Nijmegen: University of Nijmegen PhD dissertation. Arends, Jacques, Pieter Muysken, & Norval Smith (eds.). 1995. Pidgins and creoles: An introduction (Vol. 15). John Benjamins. Baillargeon, Renee. 1987. Young infants’ reasoning about the physical and spatial properties of a hidden object. Cognitive Development 2. 179–​200. Baker, Mark. 2003. Lexical categories:  Verbs, nouns and adjectives. Cambridge:  Cambridge University Press. Bates, Elizabeth. 1984. Bioprograms and the innateness hypothesis. Behavioral and Brain Sciences 7. 188–​190. Bever, Thomas. 1970. The cognitive basis for linguistic structures. In John R. Hayes (ed.), Cognition and the development of language, 279–​362. New York: Wiley. Bickerton, Derek. 1977. Change and variation in Hawaiian English. Vol. 2:  Creole syntax. Honolulu: University of Hawaii, Social Science and Linguistics Institute. Bickerton, Derek. 1981. Roots of language. Ann Arbor, MI: Karoma Press. Bickerton, Derek. 1984. The language bioprogram hypothesis. Behavioral and Brain Sciences 7. 173–​221. Bruyn, Adrienne, Pieter Muysken, & Maaike Verrips. 1999. Double-​object constructions in the creole languages:  Development and acquisition. In Michel DeGraff (ed.), Language creation and language change:  Creolization, diachrony, and development, 329–​373. Cambridge, MA: MIT Press. Bruyn, Adrienne & Tonjes Veenstra. 1993. The creolization of Dutch. Journal of Pidgin and Creole Languages 8(1). 29–​80. Carey, Susan. 2009. The origin of concepts. New York: Oxford University Press. Carrigan, Emily M. & Marie Coppola. 2017. Successful communication does not drive language development: Evidence from adult homesign. Cognition 158. 10–​27. Chen Pichler, Deborah. 2001. Word order variation and acquisition in American Sign Language. Storrs, CT: University of Connecticut PhD dissertation. Chomsky, Noam. 1959. A review of BF Skinner’s Verbal Behavior. Language 35. 26–​58. Chomsky, Noam. 1965. Aspects of the theory of syntax. Cambridge, MA: MIT Press. Chomsky, Noam. 1968. Language and mind. New York: Harcourt, Brace & World. Chomsky, Noam. 1972. Language and mind. New York: Harcourt, Brace & World. Chomsky, Noam. 1995. The minimalist program. Cambridge, MA: MIT Press. Chomsky, Noam. 2000. New horizons in the study of language and mind. Cambridge: Cambridge University Press. Chouinard, Michelle & Eve Clark. 2003. Adult reformulations of child errors as negative evidence. Journal of Child Language 30. 637–​669. Christiansen, Morten H. & Nick Chater. 2008. Language as shaped by the brain. Behavioral and Brain Sciences 31. 489–​558. Coppola, Marie & Elissa Newport. 2005. Grammatical subjects in home sign: Abstract linguistic structure in adult primary gesture systems without linguistic input. Proceedings of the National Academy of Sciences 102. 19249–​19253. DeCamp, David. 1971. Toward a generative analysis of a post-​creole speech continuum. In Dell Hymes (ed.), Pidginization and creolization of languages, 349–​370. Cambridge:  Cambridge University Press. DeGraff, Michel. 1992. Creole grammars and acquisition of syntax. Pennsylvania: University of Pennsylvania PhD dissertation. DeGraff, Michel. 1999. Creolization, language change, and language acquisition: A prolegomenon. 
In Michel DeGraff (ed.), Language creation and language change: Creolization, diachrony, and development, 146. Cambridge, MA: MIT Press. Dowty, David. 1991. Thematic proto-​roles and argument selection. Language 67. 547–​619. Dryer, Matthew. 2013. Order of subject, object and verb. In Matthew S. Dryer & Martin Haspelmath (eds.), The world atlas of language structures online. Leipzig: Max Planck Institute for Evolutionary Anthropology. Available online at: http://​wals.info/​chapter/​81.

Annemarie Kocab & Ann Senghas Evans, Nicholas & Stephen Levinson. 2009. The myth of language universals: Language diversity and its importance for cognitive science. Behavioral and Brain Sciences 32. 429–​448. Fischer, Susan. 1975. Influences on word-​order change in American Sign Language. La Jolla, CA: Salk Institute for Biological Studies. Fischer, Susan & Bonnie Gough. 1978. Verbs in American Sign Language. Sign Language Studies 18.  17–​48. Flaherty, Molly. 2014. The emergence of argument structural devices in Nicaraguan Sign Language. Chicago, IL: University of Chicago PhD dissertation. Friedman, Lynn A. 1975. Space, time, and person reference in American Sign Language. Language 51. 940–​961. Gell-​Mann, Murray & Merritt Ruhlen. 2011. The origin and evolution of word order. Proceedings of the National Academy of Sciences 108. 17290–​17295. Gergely, György & Gergely Csibra. 2003. Teleological reasoning in infancy: The naive theory of rational action. Trends in Cognitive Sciences 7. 287–​292. Gibson, Edward, Steve Piantadosi, Kimberly Brink, Leon Bergen, Eunice Lim, & Rebecca Saxe. 2013. A noisy-​channel account of crosslinguistic word order variation. Psychological Science 24. 1079–​1088. Gleitman, Lila & Elissa Newport. 1995. The invention of language by children: Environmental and biological influences on the acquisition of language. In Lila R. Gleitman & Mark Liberman (eds.), An invitation to cognitive science, 125. Cambridge, MA: MIT Press. Gold, E. Mark. 1967. Language identification in the limit. Information and Control 10. 447–​474. Goldin-​Meadow, Susan. 2003. The resilience of language: What gesture creation in deaf children can tell us about how all children learn language. New York: Psychology Press. Goldin-​ Meadow, Susan & Carolyn Mylander. 1983. Gestural communication in deaf children: Noneffect of parental input on language development. Science 221. 372–​374. Goldin-​Meadow, Susan & Carolyn Mylander. 1998. Spontaneous sign systems created by deaf children in two cultures. Nature 391. 279–​281. Goldin-​Meadow, Susan, Carolyn Mylander, & Cynthia Butcher. 1995. The resilience of combinatorial structure at the word level: Morphology in self-​styled gesture systems. Cognition 56. 195–​262. Goldin-​Meadow, Susan, Wing Chee So, Asli Özyürek, & Carolyn Mylander. 2008. The natural order of events: How speakers of different languages represent events nonverbally. Proceedings of the National Academy of Science 105. 9163–​9168. Hall, Robert A. 1962. The life cycle of pidgin languages. Lingua 11. 151–​156. Hall, Robert A. 1966. Pidgin and creole languages (Vol. 7). Ithaca, NY: Cornell University Press. Hall, Matthew, Rachel Mayberry, & Victor Ferreira. 2013. Cognitive constraints on constituent order: Evidence from elicited pantomime. Cognition 129. 117. Halliday, Michael. 1967. Notes on transitivity and theme in English: Part 2. Journal of Linguistics 3. 199–​244. Hoffmeister, Robert. 1978. The development of demonstrative pronouns, locatives, and personal pronouns in the acquisition of American Sign Language by deaf children of deaf parents. Minneapolis: University of Minnesota PhD dissertation. Hoffmeister, Robert. 1987. The acquisition of pronominal anaphora in ASL by deaf children. In Barbara Lust (ed.), Studies in the acquisition of anaphora, 171–​187. Dordrecht: Reidel. Holm, John. 1988. Pidgins and creoles (Vol. 1):  Theory and structure. Cambridge:  Cambridge University Press. Hudson Kam, Carla & Elissa Newport. 2009. 
Getting it right by getting it wrong: When learners change languages. Cognitive Psychology 59. 30–​66. Hymes, Dell. 1971. Sociolinguistics and the ethnography of speaking. In Edwin Ardener (ed.), Social anthropology and language, 47–​93. London: Tavistock. James, William. 1890. The principles of psychology, Vol. 2. New York: Henry Holt and Company. Kirby, Simon, Hannah Cornish, & Kenny Smith. 2008. Cumulative cultural evolution in the laboratory: An experimental approach to the origins of structure in human language. Proceedings of the National Academy of Sciences 105. 10681–​10686. Klima, Edward & Ursula Bellugi. 1979. The signs of language. Cambridge, MA:  Harvard University Press.

Language emergence Kocab, Annemarie, Hannah Lam, & Jesse Snedeker. 2017. Cars crashing and people hugging: The effect of animacy on word order in gestural language creation. Cognitive Science 42. 918–​938. Kocab, Annemarie, Jennie Pyers, & Ann Senghas. 2015. Referential shift in Nicaraguan Sign Language: A transition from lexical to spatial devices. Frontiers in Psychology 5. 15–​40. Kocab, Annemarie, Ann Senghas, & Jesse Snedeker. 2016. The emergence of temporal language in Nicaraguan Sign Language. Cognition 156. 147–​163. Koopman, Hilda & Claire Lefebvre. 1981. Haitian Creole pu. In Pieter Muysken (ed.), Generative studies on creole languages, 201–​221. Dordrecht: Foris. Kouwenberg, Silvia. 1990. Complementizer PA, the finiteness of its complements, and some remarks on empty categories in Papiamento. Journal of Pidgin and Creole Languages 5(1). 39–​51. Langus, Alan & Marina Nespor. 2010. Cognitive systems struggling for word order. Cognitive Psychology 60. 291–​318. Lefebvre, Claire. 2001. Relexification in creole genesis and its effects on the development of the creole. In Norval Smith & Tonjes Veenstra (eds.), Creolization and contact, 9–​42. Amsterdam:  John Benjamins Lenneberg, Eric H. 1967. The biological foundations of language. Hospital Practice 2(12). 5967. Liddell, Scott K. 1977. An investigation into the syntactic structure of American Sign Language. San Diego, CA: University of California PhD dissertation. Lillo-​Martin, Diane & Edward Klima. 1990. Pointing out differences: ASL pronouns in syntactic theory. In Susan D. Fischer & Patricia Siple (eds.), Theoretical issues in sign language research, 191–​210. Chicago: University of Chicago Press. Loew, Ruth. 1984. Roles and reference in American Sign Language: A developmental perspective. Minneapolis: University of Minnesota PhD dissertation. Lumsden, John S. 1989. On the distribution of determiners in Haitian Creole. Revue Québécoise de Linguistique 18(2). 64–​93. MacWhinney, Brian. 1977. Starting points. Language 53. 152–​168. Marcus, Gary. 1993. Negative evidence in language acquisition. Cognition 46. 53–​85. McWhorter, John H. 2001. The world’s simplest grammars are creole grammars. Linguistic Typology 5(2). 125–​166. Meier, Richard. 1982. Icons, analogues, and morphemes:  The acquisition of verb agreement in American Sign Language. San Diego, CA: University of California PhD dissertation. Meier, Richard. 1987. Elicited imitation of verb agreement in American Sign Language: Iconically or morphologically determined? Journal of Memory and Language 26. 362–​376. Meir, Irit. 1998. Syntactic-​semantic interaction in Israeli Sign Language verbs: The case of backwards verbs. Sign Language & Linguistics 1(1). 337. Mufwene, Salikoko Sangol & Marta Dijkhoff. 1988. On the so-​called ‘infinitive’ in Atlantic creoles. Lingua 77. 297–​330. Mühlhäusler, Peter. 1986. Pidgin and creole linguistics. Oxford: Blackwell. Muysken, Pieter. 1981. Creole tense/​mood/​aspect systems: The unmarked case? In Pieter Muysken (ed.), Generative studies on creole languages, 181–​199. Dordrecht: Foris. Napoli, Donna Jo & Rachel Sutton-​ Spence. 2014. Order of the major constituents in sign languages: Implications for all language. Frontiers in Psychology 15. 373. Newport, Elissa. 1988. Constraints on learning and their role in language acquisition: Studies of the acquisition of American Sign Language. Language Sciences 10. 147–​172. Newport, Elissa. 1990. Maturational constraints on language learning. Cognitive Science 14.  11–​28. 
Newport, Elissa & Richard Meier. 1985. The acquisition of American Sign Language. In Dan I. Slobin (ed.), The crosslinguistic study of language acquisition. Volume 1:  The data. 881–​938. Hillsdale, NJ: Lawrence Erlbaum. Osgood, Charles. 1980. Lectures on language performance. New York: Springer-​Verlag. Padden, Carol. 1983. Interaction of morphology and syntax in American Sign Language. San Diego, CA: University of California PhD dissertation. Padden, Carol. 1988. Grammatical theory and signed languages. In Frederick J. Newmeyer (ed.), Linguistics:  The Cambridge survey, linguistic theory:  Extensions and implications, 250–​266 Cambridge: Cambridge University Press. Padden, Carol, Irit Meir, Mark Aronoff, & Wendy Sandler. 2010. The grammar of space in two new sign languages. In Diane Brentari (ed.), Sign languages (Cambridge Language Surveys), 570–​592. Cambridge: Cambridge University Press.

Annemarie Kocab & Ann Senghas Pinker, Steven. 1979. Formal models of language learning. Cognition 7. 217–​283. Pinker, Steven. 1984. Language learnability and language development. Cambridge, MA: Harvard University Press. Pinker, Steven. 1989. Learnability and cognition: The acquisition of argument structure. Cambridge, MA: MIT Press. Pinker, Steven. 1994. The language instinct. New York: Harper Perennial Modern Classics. Pinker, Steven & Paul Bloom. 1990. Natural language and natural selection. Behavioral and Brain Sciences 13. 707–​784. Polich, Laura. 2005. The emergence of the deaf community in Nicaragua. Washington, DC: Gallaudet University Press. Prince, Ellen F. 1981. Towards a taxonomy of given-​new information. In Peter Cole (ed.), Radical pragmatics, 223–​254. New York: Academic Press. Quadros, Ronice M. de & Diane Lillo-​Martin. 2007. Gesture and the acquisition of verb agreement in sign languages. In Heather Caunt-​ Nulton, Samantha Kulatilake, & I-​ hao Woo (eds.), Proceedings of the Thirty-​First Annual Boston Conference on Language Development (BUCLD), 520–​531. Somerville, MA: Cascadilla Press. Reinecke, John E. 1969. Language and dialect in Hawaii: A sociolinguistic history to 1935. Honolulu, HI: University of Hawai‘i Press. Roberts, Sarah J. 2002. The role of identity and style in creole development: Evidence from Hawaiian Creole. Paper presented at the Annual Meeting of the Society for Pidgin and Creole Linguistics, San Francisco. Romaine, Suzanne. 1988. Pidgin and creole languages. London: Longman. Sandler, Wendy & Diane Lillo-​Martin. 2006. Sign language and linguistic universals. Cambridge: Cambridge University Press. Sandler, Wendy, Irit Meir, Carol Padden, & Mark Aronoff. 2005. The emergence of grammar:  Systematic structure in a new language. Proceedings of the National Academy of Sciences 102. 2661–​2665. Sankoff, Gillian. 1979. The genesis of a language. In Kenneth C. Hill (ed.), The genesis of language, 23–​47. Ann Arbor, MI: Karoma. Sankoff, Gillian & Suzanne Laberge. 1973. On the acquisition of native speakers by a language. Kivung 6. 32–​47. Senghas, Ann. 2003. Intergenerational influence and ontogenetic development in the emergence of spatial grammar in Nicaraguan Sign Language. Cognitive Development 18(4). 511–​531. Senghas, Ann, Marie Coppola, Elissa Newport, & Ted Supalla. 1997. Argument structure in Nicaraguan Sign Language: The emergence of grammatical devices. Proceedings of the Boston University Conference on Language Development 21. 550–​561. Senghas, Ann & Marie Coppola. 2001. Children creating language: How Nicaraguan Sign Language acquired a spatial grammar. Psychological Science 12. 323–​328. Senghas, Ann, Sotaro Kita, & Asli Özyürek. 2004. Children creating core properties of language: Evidence from an emerging sign language in Nicaragua. Science 305. 1779–​1782. Singler, John. 1990. Pidgin and creole tense-​mood-​aspect systems. Amsterdam: John Benjamins. Singler, John. 1992. Nativization and pidgin/​creole genesis: A reply to Bickerton. Journal of Pidgin and Creole Languages 7. 319–​333. Singleton, Jenny & Elissa Newport. 2004. When learners surpass their models: The acquisition of American Sign Language from inconsistent input. Cognitive Psychology 49. 370–​407. Spelke, Elizabeth. 1990. Principles of object perception. Cognitive Science 14. 29–​56. Spelke, Elizabeth, Karen Breinlinger, Janet Macomber, & Kristen Jacobson. 1992. Origins of knowledge. Psychological Review 99. 605–​632. Supalla, Ted. 1982. 
Structure and acquisition of verbs of motion and location in American Sign Language. San Diego, CA: University of California PhD dissertation. Tomasello, Michael. 1992. First verbs:  A case study of early grammatical development. Cambridge: Cambridge University Press. Tomasello, Michael. 1995. Language is not an instinct. Cognitive Development 10. 131–​156. Tomasello, Michael. 1999. The cultural origins of human cognition. Cambridge, MA:  Harvard University Press. Tomasello, Michael. 2000. The item-​based nature of children’s early syntactic development. Trends in Cognitive Sciences 4(4). 156–​163.

Language emergence Tomasello, Michael. 2008. Origins of human communication. Cambridge, MA: MIT Press. Tomasello, Michael, Ann Kruger, & Hilary Ratner. 1993. Cultural learning. Behavioral and Brain Sciences 16. 495–​552. Veenstra, Tonjes. 1996. Serial verbs in Saramaccan. Predication and creole genesis. Amsterdam: University of Amsterdam PhD dissertation. The Hague: Holland Academic Graphics. Woll, Bencie, Rachel Sutton-​ Spence, & Frances Elton. 2001. Multilingualism:  The global approach to sign languages. In Ceil Lucas (ed.), The sociolinguistics of sign languages, 832. Cambridge: Cambridge University Press.

663

664

29
WORKING MEMORY IN SIGNERS
Experimental perspectives

Marcel R. Giezen

29.1  Introduction Working Memory (WM) can generally be considered as the ensemble of components of the mind that hold information temporarily in a heightened state of availability for use in ongoing information processing (Cowan 2017). Because of its role in temporarily maintaining information for further processing, WM capacity is believed to play an important role in complex cognitive tasks such as reasoning, mathematics, reading, and language understanding. Although theoretical perspectives on WM capacity vary widely, a common distinction is between simple tasks that only require the storage of information, and more complex tasks that require additional processing (manipulation) of information. For example, in the multicomponent model of WM (Repovš & Baddeley 2006; Baddeley 2007), the WM architecture is divided into two short-​term memory (STM) subsystems responsible for temporarily storing verbal/​acoustic information (the phonological loop) and visuospatial information (the visuospatial sketchpad), and an attentional control system (central executive), responsible for focusing and distributing attention, as well as switching between tasks and interfacing with long-​term memory. A  fourth component has been included in later versions of this model that allows for the multimodal integration of information between these three subsystems and long-​term memory (the episodic buffer, Baddeley (2000)). Alternative approaches consider WM as activated components of long-​ term memory (Cowan 2005), as a limited resource that can be flexibly allocated among items to be maintained in memory (e.g., Rönnberg et al. 2013; Ma et al. 2014), or focus on the role of individual differences in attentional control or temporal processing that potentially underlie WM capacity (e.g., Oberauer 2002; Barrouillet et al. 2004; Engle & Kane 2004). In addition to these more general theories of WM, several different models of phonological and visual STM have been proposed (for discussion, see, e.g., Page & Norris (1998); Klauer & Zhao (2004); Jones et  al. (2006)). Nonetheless, the multicomponent model is one of the most influential models of WM and in particular the phonological loop has informed many experimental studies in language acquisition and processing (see e.g., Baddeley et al. (1998); Baddeley (2003); Archibald (2017); Pierce et al. (2017)

for discussion). Because the focus of this chapter is on experimental perspectives on WM in signers, I will not go further into the differences between these WM models and their relative strengths or weaknesses (for discussion, see, e.g., Miyake & Shah (1999); Andrade (2001); Cowan (2017); Adams et al. (2018); Baddeley et al. (2018)). More importantly, as we will see, experimental research on WM in signers suggests that speakers and signers have access to the same set of core WM components and associated processes, although they may differ in the relative reliance on some of these components and processes (see Hall & Bavelier (2010) for an excellent overview). In reviewing this research, evidence from various experimental paradigms will be discussed, including digit spans and letter spans, sentence spans, and free recall tasks. The chapter is organized as follows. In the next section, I will first briefly sketch the architecture of phonological STM for speech-​based and sign-​based information (Section 29.2). Section 29.3 discusses possible modality differences in the amount of phonological information that speakers and signers are able to maintain in STM. The following two sections will review experimental evidence from more general linguistic and symbolic WM measures in signers (Section 29.4), and evidence from studies on non-​linguistic visuospatial WM (Section 29.5). In Section 29.6, I will discuss a small set of experimental studies that investigated the role of spatial and temporal organization in WM for signers. The final section provides an integrated summary of key findings from experimental studies of WM in signers, and a brief discussion of important methodological considerations and the role of WM in sign language processing.

29.2  The architecture of phonological STM for a visuospatial language Phonological STM primarily refers to the capacity to hold a limited amount of phonological information in memory, usually thought to be limited in its capacity to 7 ± 2 items (see Cowan (2001) for discussion). That is, it is mainly used for rapid and temporal sequential storage. In the multicomponent model of WM, this is accomplished through phonological encoding of the to-​be-​remembered information in a phonological buffer. Importantly, the encoded information decays with time, and therefore the buffer relies on a subvocal articulatory rehearsal mechanism that refreshes the phonological information and thus permits active maintenance of the information (see van der Hulst & van der Kooij, Chapter 1, for phonological structure of signs). Bellugi et al. (1975) provided early evidence that deaf signers remember linguistic information in a visual, sign-​based form, instead of, for example, recoding signs into English words. When they had to remember and recall lists of American Sign Language (ASL) signs, deaf ASL signers were more likely to make phonological errors (misremembering target signs as phonologically similar signs) than, for example, semantic errors (misremembering signs as semantically related signs). Similarly, signers recall fewer signs for lists with phonologically similar signs than for lists with phonologically dissimilar signs (Wilson & Emmorey 1997). Furthermore, presentation of irrelevant visuospatial input, such as pseudosigns or moving non-​nameable shapes, during a retention interval after encoding but before recall, reduces memory performance for deaf signers (but not for hearing non-​signers), suggesting sign-​based phonological information is encoded in a visual format in STM (Emmorey & Wilson 2003). Evidence for a sign-​based articulatory rehearsal mechanism comes from studies that found sign length effects –​better memory for lists with shorter signs than lists with longer signs  –​and articulatory suppression

effects  –​poorer recall when producing irrelevant hand movements during encoding  –​ in deaf signers (e.g., MacSweeney et al. 1996; Wilson & Emmorey 1998). Finally, both speakers and signers recall items in serial positions at the beginning and at the end of the list more accurately in serial recall tasks than items in between, that is, they exhibit primacy and recency effects (e.g., Rönnberg et al. 2004; Bavelier et al. 2008a; Rudner & Rönnberg 2008). Importantly, neuro-​imaging studies have provided further support for these general similarities in STM architecture for speech-​based and sign-​based information (see Rudner et al. (2009) for review). Moreover, similar effects have been observed for non-​signers during the recall of unfamiliar gestures consisting of sign-​like combinations of locations, handshapes, and movements (Wilson & Fox 2007), and of meaningless manual gestures (Rudner 2015), suggesting that rehearsal processes in WM may not be restricted to the maintenance of linguistic information, but extend to non-​linguistic information when it can be motorically reproduced.
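The interplay of trace decay and articulatory rehearsal described above can be made concrete with a toy simulation. The sketch below is purely illustrative and not a published model: it assumes a fixed decay window (roughly two seconds, a figure often cited for the phonological loop) and asks how many items can be cyclically refreshed before the first trace expires, which is one common way of thinking about why longer articulation times, as with many lexical signs, could yield shorter spans. All parameter values are hypothetical.

```python
def max_rehearsable_items(articulation_time_s: float, decay_window_s: float = 2.0) -> int:
    """Toy estimate of span under a decay-plus-rehearsal account.

    Assumes items are refreshed one after another in a fixed loop; an item's
    trace survives only if it is re-articulated before `decay_window_s`
    elapses. Both parameters are illustrative assumptions, not measured values.
    """
    if articulation_time_s <= 0:
        raise ValueError("articulation time must be positive")
    # With cyclic rehearsal, n items can be maintained only if rehearsing
    # all n of them fits inside the decay window: n * t_articulation <= window.
    return int(decay_window_s // articulation_time_s)


if __name__ == "__main__":
    # Hypothetical articulation times: a short spoken item vs. a longer sign.
    for label, t in [("short spoken item (0.3 s)", 0.3),
                     ("longer sign (0.6 s)", 0.6)]:
        print(f"{label}: ~{max_rehearsable_items(t)} items maintainable")
```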

29.3  Modality effects in phonological STM 29.3.1  Evidence from serial recall tasks Despite structural similarities in phonological STM for speech-​based and sign-​based information, many studies have reported better performance on serial recall tasks for hearing speakers than for deaf signers (e.g., Bellugi et al. 1975; Krakow & Hanson 1985; Logan et al. 1996) and for words than for signs in hearing signers (Rönnberg et al. 2004; Geraci et  al. 2008). In serial recall tasks, participants are presented with series of lexical items (for example, between four and six) that they have to recall in the order in which they were presented. One possible explanation for why spoken or printed words are more easily recalled than signs in serial recall tasks is that, in general, signs take longer to articulate than words. Because the phonological information in STM capacity decays with time and is refreshed through articulatory rehearsal, the number of items that speakers can successfully remember is a function of word length and articulation rate. However, studies have shown that articulatory duration has relatively limited effects on recall performance and that STM capacity differences in speech and sign persist when differences in articulatory duration are controlled for (Boutla et al. 2004; Geraci et al. 2008; Mann et  al. 2010; Hall & Bavelier 2010, 2011; Marshall et  al. 2011). Moreover, modality differences in STM capacity have not only been observed when comparing memory for lexical signs and printed or spoken words, but also with stimuli that are less likely to differ in articulatory duration, such as digits and letters. Wilson et al. (1997) investigated the ability of deaf (both native and non-​native) children and hearing children between eight and eleven years old to recall lists of (signed or spoken) digits in either forward or backward order. They found larger forward and backward digit spans for the native deaf children than for the non-​native deaf children. Furthermore, the native deaf children had smaller forward digit spans than the hearing children, but higher backward digit spans. This modality difference for forward digit spans has also been found for adult deaf signers and hearing speakers (Boutla et al. 2004; Bavelier et al. 2008a), as well as for hearing ASL-​English bilinguals (Hall & Bavelier 2011). However, because digits are phonologically similar in many sign languages, but not in spoken languages, it is possible that the STM advantage for speech is merely a

by-​product of phonological (dis)similarity. To investigate this possibility, Boutla et  al. (2004) compared STM spans for fingerspelled ASL letters (see Figure 29.1) and audiovisual English digits, which are both phonologically dissimilar. Using these stimuli, they still found (i) a larger STM span for hearing speakers than for native deaf signers, and (ii) a larger STM span for English digits than for ASL letters in ASL-English bilinguals. However, others have pointed out that digits may have a special status in STM and therefore cannot be directly compared to letters (Wilson & Emmorey 2006a, 2006b). In addition, although articulation rate might be more similar for spoken and signed digits and/​ or letters than lexical signs and words, another concern is that digits and letters are not complete phonological signs and are produced without movement in many sign languages (Geraci et al. 2008).

Figure 29.1  Example of a serial recall sequence in the ASL letter span task used by Emmorey et al. (2017). Participants had to recall in order fingerspelled letter sequences of increasing length presented at a rate of one letter per second

More recently, Andin et al. (2013) compared STM spans between carefully matched deaf native and early Swedish Sign Language (SSL) signers and hearing Swedish speakers for printed digits as well as letters, controlled for phonological similarity and articulatory duration. They found similar STM spans for letters, but lower spans for digits in the signers. However, in a second experiment with native and early British Sign Language (BSL) signers and hearing English speakers, they did not find a significant difference in digit span between the two groups, possibly because of differences in the educational background of SSL signers and BSL signers. Specifically, BSL signers may have been more likely to recode the printed digits into speech-​based representations than SSL signers. Possible recoding of signs into speech-​based representations is a particular concern when testing hearing signers. For example, van Dijk et al. (2012) presented interpreters of Dutch and Sign Language of the Netherlands (NGT) with a span task with signs and spoken words and did not find a modality difference in a standard forward serial recall condition. However, the same participants also completed the task under oral articulatory suppression and manual articulatory suppression. Whereas manual suppression did not affect recall scores in either modality, oral suppression decreased recall scores in both modalities, suggesting that the interpreters recoded the signs into speech-​based representations or relied on speech-​based components (mouthing) of the signs for storage and maintenance in STM (cf. García-​Orza & Carratalá 2012; Liu et al. 2016).
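As a concrete illustration of the serial span procedures discussed in this section, the sketch below implements a bare-bones forward span: list length increases until the participant fails to reproduce a list in its exact order, and the span is the longest length recalled correctly. This is a simplified composite of the procedures cited above, not the protocol of any specific study; the stimulus set, starting length, and stopping rule are assumptions.

```python
import random
from typing import Callable, List, Sequence


def forward_span(get_recall: Callable[[Sequence[str]], List[str]],
                 stimuli: Sequence[str],
                 start_len: int = 3,
                 max_len: int = 9) -> int:
    """Return the longest list length recalled in exact serial order.

    `get_recall` stands in for presenting the list and collecting the
    participant's ordered response (signed, spoken, or typed).
    """
    span = 0
    for length in range(start_len, max_len + 1):
        trial = random.sample(stimuli, length)   # distinct items on each trial
        response = get_recall(trial)
        if list(response) == list(trial):        # strict serial-order scoring
            span = length
        else:
            break                                # stop after the first failure
    return span


if __name__ == "__main__":
    # Simulated participant who reliably recalls up to 5 items in order.
    def fake_participant(items):
        return list(items) if len(items) <= 5 else list(items[:-1])

    letters = list("ABCDEFGHJKLMNP")             # hypothetical letter set
    print("Estimated forward span:", forward_span(fake_participant, letters))
```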

29.3.2  The role of recall direction Based on the absence of a difference between forward and backward recall in their digit span study for the deaf children, Wilson et al. (1997) speculated that deaf children might rely less on temporally-​based sequencing (which would benefit forward recall performance) than hearing children, and might instead rely on visuospatial encoding of sign-​based information in STM. However, it should be noted that Bavelier et al. (2008a) found the typical pattern of better forward than backward recall performance in their study with adult deaf and hearing ASL signers. Furthermore, while Andin et  al. (2013) found no difference in forward versus backward digit or letter recall for adult deaf SSL signers, they unexpectedly also found no difference for adult hearing Swedish speakers. The authors suggest this may have been because all stimuli were presented visually, which may have reduced recency effects in forward recall (cf. Cowan et al. 2004). It therefore appears that differential directionality effects for speech-​based and sign-​based information in STM are less robust than modality effects in forward recall.

29.3.3  Different stages of STM processing: encoding, rehearsal, and recall Most recent studies that compared STM spans between signers and speakers presented signed and spoken stimuli, respectively, allowing participants to do the task using their preferred mode of communication. However, this also introduces potential confounds between, for example, modality differences in the memory for auditory and visual traces and modality differences in speech-​based versus sign-​based phonological encoding (see Andin et al. (2013) for discussion). Although, for example, print presentation allows for better control over similarities in perceptual input in STM tasks, it introduces another potential confound. Because print is not the primary code of communication for both speakers and signers, the presented stimuli need to be recoded into a speech-​based or sign-​based phonological code. This recoding process may be easier and/​or faster in one modality than in the other modality. To directly investigate this issue, Hall & Bavelier (2011) conducted a very clever study with hearing bilingual ASL-​English signers, in which they manipulated the modality of perception, encoding and recall of ASL and English digits. For example, they asked their participants to recode signed sequences into English, or encode spoken digit sequences into ASL, and then recall the sequences either in ASL or English. The results showed that recoding sign stimuli into a speech-​ based code resulted in a longer span than encoding the sign stimuli directly into a signed code. Presentation of the digits in a speech-​based code as opposed to a sign-​based code further enhanced recall performance. In contrast, signed recall was associated with longer spans than spoken recall, regardless of the modality of encoding. It thus seems that advantages for speech-​based codes in STM tasks are restricted to specific stages of STM, more specifically, presentation and encoding, whereas the recall stage may show a preference for a sign-​based code. The advantage for sign-​based recall is not entirely clear, although it might be related to the absence of interference between phonological information in STM and auditory feedback during spoken recall (Hall & Bavelier 2011). The suggestion that modality differences differentially impact separate stages of STM is consistent with neuroimaging findings by Bavelier et  al. (2008b). In this study, deaf

signers more strongly activated areas associated with passive memory storage (ventral prefrontal cortex) during encoding and maintenance, whereas hearing speakers more strongly activated areas related to executive processes associated with the retention of temporal order and attention switching (dorsal regions in the inferior-​parietal cortex). In contrast, at the recall stage, these executive control areas were more strongly activated by the signers than the speakers.

29.3.4  The role of serial maintenance

The findings by Hall & Bavelier (2011) and Bavelier et al. (2008b) suggest that the advantage for speech-based information in STM tasks might be specifically related to the temporal organization demands of serial recall tasks. One possibility is that the auditory modality is particularly effective in maintaining temporal order (e.g., Watkins et al. 1992; Kanabus et al. 2002), whereas the visual modality is more effective in maintaining spatial information (e.g., O'Connor & Hermelin 1973). As a consequence, speakers and signers may differentially encode order information in STM, with the former mainly relying on temporal encoding and the latter on spatial encoding (e.g., Wilson 2001). Alternatively, speakers may rely more on temporal chunking of units and articulatory rehearsal than signers do, which helps them to maintain temporal order in serial recall tasks (Hall & Bavelier 2010).

Compelling evidence for this second explanation comes from studies showing similarities in recall performance between deaf signers and hearing speakers when the task does not require the retention of temporal order (Hanson 1982). In free recall tasks, participants are typically presented with a list of items that exceeds their STM capacity ('supra-span list'; for example, 16 items). After all items are presented, they have to recall as many items from the list as possible, irrespective of their position in the list. Bavelier et al. (2008a) replicated the absence of modality differences in a free recall task with hearing ASL-English bilinguals. Interestingly, they also found that hearing signers were more likely to (spontaneously) recall items in the order in which they had been presented (i.e., items at earlier positions in the list were recalled before items at later positions in the list) when they performed the task in English than in ASL. Alba (2016) obtained similar results in a study of serial recall (forward and backward sign and word spans) and free recall with deaf Catalan Sign Language (LSC) signers and hearing Catalan speakers. In free recall, deaf signers were less likely than hearing speakers to recall items in the order in which they had been presented. In contrast to Bavelier et al. (2008a), however, the deaf signers in this study still recalled fewer items than the hearing speakers in the free recall task. Hirshorn et al. (2014) also reported lower free recall scores for deaf signers than hearing speakers. Given the mixed findings across these studies, it is difficult to draw clear conclusions from free recall tasks about STM capacity differences for speech and sign.

Gozzi et al. (2011) took a slightly different approach to investigate the retention of temporal order in deaf signers. They presented deaf Italian Sign Language (LIS) signers and hearing non-signers matched on age, gender, and education with sequences of signs and spoken or written words for serial recall that were one item longer than each participant's previously established individual span length. They scored errors as either item errors or order errors and calculated their respective proportions corrected for list length. When

challenged with this above-​capacity task, deaf signers and hearing speakers did not differ significantly in the proportion of item errors or order errors. Furthermore, Miozzo et al. (2016) concluded that deaf signers relied on similar positional encoding strategies as hearing speakers to retain signs in STM. This suggests that other factors than differences in temporal order processing per se may be responsible for the lower STM sign spans. It has been suggested, for example, that signs are phonologically ‘heavier’ than words because many of their properties are encoded simultaneously, which in turn makes signs more sensitive to decay in the phonological storage buffer (Geraci et al. 2008; Gozzi et al. 2011; Cecchetto et al. 2017). Alternatively, signs may have more phonological ‘degrees of freedom’ than words, because sign languages generally have a larger phonological feature inventory and fewer restrictions on the combinations of phonological features than spoken languages (Marshall et al. 2011). Consistent with this idea, Mann et al. (2010) showed that deaf children had substantial difficulty with a non-​sign repetition task, a task that is considered to tap into phonological STM but does not require serial recall. Malaia & Wilbur (2018) recently proposed another intriguing explanation for the apparent difficulty of signers with maintaining serial order in WM. According to some researchers, serial order in verbal WM is closely linked to spatial attention. Specifically, they suggest that the brain makes use of an internal spatial template to bind the items that have to be remembered (the ‘mental whiteboard hypothesis’), which can be searched and used for retrieval through internal spatial attention (for discussion, see, e.g., Abrahamse et  al. (2014, 2017)). Malaia & Wilbur (2018) hypothesize that the spatial processing resources recruited to facilitate serial order maintenance in verbal WM are already being used by signers to encode linguistic properties of the sign such as location and movement. As a consequence, they are not able to use these spatial resources for serial rehearsal. Geraci et al. (2008) put forward a related idea, namely that the more static visuospatial information of signs is maintained in visuospatial STM, while the dynamic sequential information in the movement of the sign is maintained in phonological STM. The recruitment of both storage components by signers would arguably represent a heavier cognitive load on the central executive than the activation of a single subsystem by speakers. These ideas around potential competition for spatial and serial rehearsal processes in STM provide a novel and promising perspective on modality differences in WM that will hopefully generate new empirical investigations in this domain.
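To make the item/order error distinction used by Gozzi et al. (2011) concrete, the sketch below scores a single above-span recall attempt: a response counts toward item errors when a presented item is omitted or an intrusion is produced, and toward order errors when a correct item is recalled at the wrong serial position. This is a generic reconstruction of that style of scoring, not the authors' actual procedure or code, and the correction for list length is a simple proportion.

```python
from typing import Sequence, Tuple


def item_and_order_errors(presented: Sequence[str],
                          recalled: Sequence[str]) -> Tuple[float, float]:
    """Return (item_error_rate, order_error_rate), each divided by list length."""
    n = len(presented)
    # Item errors: presented items missing from the response, plus intrusions.
    missing = sum(1 for item in presented if item not in recalled)
    intrusions = sum(1 for item in recalled if item not in presented)
    item_errors = missing + intrusions
    # Order errors: correct items recalled at the wrong serial position.
    order_errors = sum(
        1 for pos, item in enumerate(recalled)
        if item in presented and pos < n and presented[pos] != item
    )
    return item_errors / n, order_errors / n


if __name__ == "__main__":
    presented = ["HOUSE", "TREE", "DOG", "BOOK", "FISH"]
    recalled = ["HOUSE", "DOG", "TREE", "BOOK", "CAR"]   # one intrusion, two transposed
    print(item_and_order_errors(presented, recalled))     # (0.4, 0.4)
```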

29.4  Evidence from other linguistic and symbolic WM measures Most early experimental studies of WM in signers have used STM measures, and the phonological loop in particular. Phonological STM is a critical component of WM capacity and has been shown to correlate strongly with language learning abilities in hearing children and deaf children with cochlear implants (e.g., Gathercole 2006; Pisoni et  al. 2016; Pierce et al. 2017). However, its predictive power of more general language processing in adult speakers is still under debate (e.g., Caplan & Waters 1999; Lauro et al. 2010; Cecchetto & Papagno 2011; Papagno & Cecchetto 2018). Furthermore, phonological STM abilities may account for only a small portion of unique variance in general WM capacity (e.g., Kane et al. 2004; Cowan et al. 2005).

More general WM measures include complex spans such as reading spans, listening spans, and operation spans (see e.g., Conway et al. (2005), for discussion). These WM measures generally correlate well with language and reading comprehension measures for adults (e.g., Daneman & Merikle 1996; Caplan & Waters 2005; Daneman & Hannon 2007; Link et al. 2014), although such correlations may to some extent reflect individual differences in, for example, fluid intelligence (Engle 2018). Similar to simple STM spans, complex spans involve the temporary storage of information, but in addition, they require manipulation of the stored information. For example, in reading and listening spans participants have to read aloud or listen to series of sentences and remember and later recall the last word of each sentence. The number of sentences increases in number as the task progresses. In the operation span task, participants have to check an increasing number of simple equations with single digits (for example, consisting of a multiplication and addition) and state whether the equations are correct or not. They also have to remember and later recall an unrelated item (for example, a letter or a word) presented after the equation. There have been relatively few studies comparing complex spans for deaf signers and hearing speakers. Boutla et  al. (2004) constructed a signing span task and a speaking span task, in which native deaf ASL signers and hearing English speakers were presented with series of signs and spoken words that increased in length as the task progressed. After all items at a given length had been presented, participants had to recall as many words or signs as possible in a self-​generated sentence. Critically, the order of recall of the words or signs was irrelevant. While the same hearing speakers had larger STM spans than the deaf signers, the two groups did not differ significantly on the production span task. Alamargot et al. (2007) compared deaf students who were French Sign Language (LSF) signers and hearing French students (non-​signers) on a written and spoken/​signed version of the production span task and also found no significant difference on this WM measure between the two groups despite larger STM spans for the hearing students. Importantly, the production span tasks used by Boutla et al. (2004) and Alamargot et al. (2007) did not require serial recall of the presented signs and words. That is, participants were free to produce the remembered items in whichever order they liked, similar to the free recall tasks discussed earlier. In contrast, Marschark et al. (2016) reported larger WM spans for hearing signers and hearing speakers than for deaf signers on two complex WM tasks (reading span and operation span) with verbal written stimuli that did require serial recall. However, poorer performance by the deaf signers in this study could have been due (at least in part) to the use of written English materials, which may have increased the task demands for deaf participants. Emmorey et al. (2017) recently reported data from deaf and hearing ASL signers on several memory span tasks, including a complex sign span task comparable to a listening span task. Specifically, participants had to judge the plausibility of videotaped ASL sentences and had to recall the final sign of each sentence after the full sequence of sentences at a given level had been presented (see Figure 29.2). 
While deaf signers had shorter forward letter spans (STM) than hearing English speakers, and hearing signers recalled more letters in English than ASL, no modality effects were found for the complex span task.
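The complex span logic described above, a processing judgment interleaved with storage of a to-be-recalled item, can be sketched as follows. The example uses arithmetic verification as in an operation span; the equations, memory letters, and scoring are illustrative assumptions rather than the materials of any study cited here.

```python
import random
import string
from typing import List, Tuple


def make_operation_span_set(set_size: int) -> List[Tuple[str, bool, str]]:
    """Build one set of (equation, is_correct, memory_item) trials."""
    trials = []
    for _ in range(set_size):
        a, b = random.randint(2, 9), random.randint(2, 9)
        shown = a * b if random.random() < 0.5 else a * b + random.randint(1, 3)
        equation = f"{a} x {b} = {shown}"
        trials.append((equation, shown == a * b, random.choice(string.ascii_uppercase)))
    return trials


def score_set(trials, judgments: List[bool], recall: List[str]) -> Tuple[int, int]:
    """Return (correct processing judgments, memory items recalled in serial order)."""
    processing = sum(int(j == truth) for (_, truth, _), j in zip(trials, judgments))
    targets = [item for _, _, item in trials]
    storage = sum(int(r == t) for r, t in zip(recall, targets))
    return processing, storage


if __name__ == "__main__":
    random.seed(1)
    trial_set = make_operation_span_set(3)
    for eq, _, letter in trial_set:
        print(f"Verify: {eq}   then remember: {letter}")
    # A perfect simulated participant:
    judgments = [truth for _, truth, _ in trial_set]
    recall = [letter for _, _, letter in trial_set]
    print("Scores (processing, storage):", score_set(trial_set, judgments, recall))
```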

Figure 29.2  Example of a serial recall sequence in the ASL sentence span task used by Emmorey et al. (2017). In this task, participants were presented with increasing sets of sentences. They had to: (1) decide for each sentence in a set whether the sentence was semantically plausible or implausible (using a foot response to avoid possible effects of articulatory suppression), and (2) remember the last sign of each sentence (marked by an arrow in the figure). At the end of the sequence, they had to recall in order the final signs of each sentence

Wang & Napier (2013) compared the performance of hearing and deaf native and non-native signers on a similar complex sign span task adapted for Australian Sign Language (Auslan). All hearing signers were interpreters. Although the hearing signers outperformed the deaf signers, recall scores did not differ between native and non-native signers, and hearing status and age of sign language acquisition did not interact. Furthermore, Andin et al. (2013), in two separate studies, reported similar operation spans for native or early deaf SSL and BSL signers compared to hearing Swedish and British speakers. Although this task required serial recall (of the stated answers to the presented equations), it is not clear whether the participants stored and/or manipulated the presented information in a phonological code.

In addition to complex spans, the n-back paradigm is used in the literature as a more general WM measure. In this paradigm, stimuli are presented serially and the participant has to match each stimulus to the stimulus that was presented n steps back in the series. Neuroimaging studies of WM often make use of this task because it does not require overt recall (Owen et al. 2005). Several n-back neuroimaging studies have been conducted with signers (see Rudner (2018) for a review). These studies again indicate overall similarities in the neural architecture of WM for speech and sign, although the superior temporal cortex appears to be more strongly engaged in WM processing by deaf signers than hearing speakers, which may reflect differences in sensorimotor processing (e.g., Ding et al. 2015; Cardin et al. 2017). As has been observed for hearing speakers, signers' performance on n-back tasks is sensitive to WM load (n), signal degradation, and familiarity of the stimuli (Rudner 2018).

In summary, the findings from studies that included more general WM measures in signers suggest that the often-reported advantage for speakers on linguistic STM tasks that require serial recall (i.e., digit, letter, or word spans) may not extend to complex WM span tasks that require storage and processing of linguistic stimuli. That is, significant differences between signers and speakers are most apparent when the task involves the (short-term) serial maintenance of phonological information.
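Because the n-back paradigm described above does not involve overt recall, it is scored as target detection. The following minimal sketch (with a made-up letter stream and no timing or signal-detection analysis) shows how n-back targets are defined and how a simple accuracy score could be computed.

```python
from typing import List, Sequence


def nback_targets(stream: Sequence[str], n: int) -> List[bool]:
    """Mark, for each position, whether the stimulus matches the one n steps back."""
    return [i >= n and stream[i] == stream[i - n] for i in range(len(stream))]


def nback_accuracy(stream: Sequence[str], responses: Sequence[bool], n: int) -> float:
    """Proportion of positions where the participant's yes/no response was correct."""
    targets = nback_targets(stream, n)
    return sum(int(r == t) for r, t in zip(responses, targets)) / len(stream)


if __name__ == "__main__":
    stream = ["K", "F", "K", "M", "K", "P", "M"]        # hypothetical letter stream
    print(nback_targets(stream, n=2))                   # 2-back targets at positions 2 and 4
    perfect = nback_targets(stream, n=2)
    print("Accuracy:", nback_accuracy(stream, perfect, n=2))
```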

29.5  Modality effects in visuospatial WM?

Given the evidence for modality differences in serial maintenance of phonological information in STM tasks, it seems a natural question to ask whether signers and speakers also perform differently on tasks that require the serial maintenance of non-linguistic visual information. Alternatively, because of their experience with a visuospatial language and given other reported effects of auditory deprivation and/or sign language use on, for example, peripheral visual attention, mental imagery and transformation, and face recognition (for excellent reviews, see, e.g., Emmorey & McCullough (2009) and Dye & Bavelier (2010)), signers might outperform non-signers on visuospatial WM tasks.

Compared to studies of phonological STM in signers, there are relatively few studies that specifically investigated visuospatial STM processes in deaf and/or hearing signers. Most of these studies used a variation of the Corsi blocks task, which assesses the ability to maintain and recall a visuospatial sequence of events. These events consist of block tapping sequences on a two-dimensional or three-dimensional board with nine irregularly positioned cubes. The sequences, and thus the amount of visuospatial information that has to be remembered, increase in length as the task progresses. Participants respond manually by tapping the correct blocks in the same (or reverse) order as they were presented. The sequences can be presented by an experimenter (live or video recorded) or as computerized sequences. Accordingly, participants recall the sequences on a model of the Corsi blocks in front of them or by using touch screen responses. Furthermore, task versions may differ in the path configurations of the sequences (e.g., Busch et al. 2005).

Studies that compared performance by deaf and hearing children and adults on the Corsi blocks (forwards and/or backwards recall) have either reported similar scores (e.g., Logan et al. 1996; Falkman & Hjelmquist 2006; Koo et al. 2008; Marschark et al. 2015), or better scores for deaf signers (e.g., Wilson et al. 1997; Geraci et al. 2008; Lauro et al. 2014). Furthermore, hearing children who were learning sign language outperformed peers without sign language experience (Capirci et al. 1998). While Keehner & Gathercole (2007) found no difference in performance between a group of non-native BSL-English interpreters and a group of English non-signers on a standard version of the Corsi blocks test, the signers outperformed the non-signers on a version of the task that entailed 180 degree rotation, that is, when the task also required a mental transformation. In addition, Emmorey et al. (2017) found no difference in Corsi blocks performance between deaf or hearing ASL signers and English non-signers.
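The Corsi procedure just described translates naturally into a small sketch: a sequence of block positions of increasing length is presented, and the response must reproduce the taps in the same (or reversed) order. The block coordinates, sequence generation, and scoring rule below are illustrative assumptions rather than any standardized version of the task.

```python
import random
from typing import List, Tuple

# Nine irregularly positioned blocks, given here as arbitrary (x, y) coordinates.
BLOCKS: List[Tuple[int, int]] = [(1, 5), (3, 8), (5, 2), (6, 6), (2, 2),
                                 (8, 4), (7, 9), (4, 4), (9, 1)]


def corsi_sequence(length: int) -> List[int]:
    """Pick `length` distinct block indices to be tapped in order."""
    return random.sample(range(len(BLOCKS)), length)


def correct(sequence: List[int], response: List[int], backward: bool = False) -> bool:
    """Score a trial: the response must match the sequence (reversed if backward)."""
    target = list(reversed(sequence)) if backward else sequence
    return response == target


if __name__ == "__main__":
    random.seed(0)
    seq = corsi_sequence(4)
    print("Tap blocks at:", [BLOCKS[i] for i in seq])
    print("Forward correct:", correct(seq, seq))
    print("Backward correct:", correct(seq, list(reversed(seq)), backward=True))
```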

In contrast to visuospatial STM, little is known about visuospatial WM capacity in deaf or hearing signers. López-​Crespo et al. (2012) compared performance of Swedish deaf children with different communication modes (SSL, Swedish, or bilingual SSL-​ Swedish) and hearing children on a delayed matching-​to-​sample task in which a visual sample stimulus (a Kanji letter) was presented on the screen, followed by a comparison stimulus after 0 (no delay) or 4000 (delayed matching) milliseconds. The task for the children was to judge whether the comparison and sample stimuli matched or not. Across both conditions, accuracy was higher for the bilingual children and hearing children than for the deaf SSL-​only and Swedish-​only children. Rudner et al. (2016) found that adult deaf signers outperformed hearing adult non-​signers with either normal or poor hearing on a card-​pair matching task. Marshall et  al. (2015) compared native and non-​native deaf children’s performance on the Corsi blocks and a spatial odd-​one-​out task. In this latter task, the children had to identify the odd one out on a display with three shapes, two of which were identical. In addition, they had to remember the location of this shape and later recall these in order by pointing to corresponding empty boxes on the screen. The native deaf children and hearing children did not differ significantly on either task, but the non-​native deaf children scored lower than the hearing children on the backwards condition of the Corsi blocks and the odd-​one-​out task. Finally, Emmorey et al. (2017) tested deaf ASL signers, hearing ASL-​English bilingual signers, and hearing English non-​signers on a visuospatial complex span task and found no significant group differences. In this spatial span task, participants were presented with sequences of rotated letters on the screen and first had to decide whether each letter was presented in normal or mirrored format. In addition, they had to remember the spatial orientation of each letter. The length of the letter sequences increased as the task progressed. During recall, a grid with eight squares that represented the different possible orientations of the letter was shown on the computer display, and participants responded manually by clicking on the correct squares in the order in which the letters had been presented (see Figure 29.3).

Figure 29.3  Example of a serial recall sequence in the spatial span task used by Emmorey et al. (2017). In this task, participants were presented with sequences of increasing length of normal or mirrored stimulus letters presented in different spatial orientations. Participants had to: (1) decide for each letter whether it was normal or mirrored (using a foot response to avoid possible effects of articulatory suppression), and (2) remember the spatial orientation of the top of each letter. At the end of the sequence, they had to recall in order the spatial orientation of the letters by clicking one of eight boxes on a spatial grid

In summary, results are mixed regarding differences in visuospatial STM and WM capacity between signers and non-​signers. However, whenever group differences are observed, they generally indicate better performance for signers. Importantly, this pattern of results strongly suggests that smaller STM capacity for signers is limited to phonological information and does not extend to non-​linguistic visuospatial information. Of course, it is still possible that signers and non-​signers are both at a disadvantage in visuospatial serial recall tasks if, as suggested earlier, the visual modality is simply not very well suited for temporal order processing.

29.6  Beyond modality-​specific storage and recall A small set of studies not yet discussed so far have provided a very useful new perspective on the role of temporal and spatial processing in WM for deaf signers by manipulating the amount of spatial and/​or temporal information that was available to participants. Rudner & Rönnberg (2008) conducted a series of intriguing experiments in which they manipulated the cognitive demands during presentation, encoding, and recall to tap into modality-​specific differences in WM associated with explicit processing demands (Rönnberg et al. 2013). I will limit the discussion here to their third experiment, which was the only experiment that revealed modality differences. In this experiment, deaf signers and hearing speakers had to recall lists consisting of nine easily nameable pictures that differed in inter-​item semantic similarity. The study included three different presentation conditions. In the spatial presentation condition, the nine pictures were simultaneously presented in a spatial array. In the temporal presentation condition, the pictures were presented sequentially, one at a time, within consecutive cells of the spatial array. In the mixed presentation condition, the items were presented serially in the spatial array. A distractor task preceded recall that prevented rehearsal. Item and order recognition were independently probed during recall, and items were either cued in the order in which they were presented or cued in random order. Although semantic inter-​item similarity and temporal presentation facilitated item recognition for deaf signers and hearing speakers, only the speakers benefited from the serial order of recognition cues (regardless of presentation condition), and order recognition was generally slower for the deaf participants than the hearing participants, suggesting that, in contrast to hearing speakers, retrieval for deaf signers is not sensitive to temporal organization. Using the same design, Rudner et al. (2010) furthermore found that older deaf signers actually performed worse on order recognition in the temporal presentation condition than matched older hearing speakers as well as the younger deaf signers from Rudner & Rönnberg (2008), suggesting that, with age, deaf signers are less able to retain order information in the context of high temporal processing demands. Recently, Hirshorn et  al. (2012) developed three different adaptations of the Corsi blocks task to test the role of spatiotemporal indexing, which is hypothesized to take place in the episodic buffer of STM. In one version, deaf signers and hearing non-​signers watched a sequence of squares presented at different locations on the screen and were presented with probe displays that presented the full array of squares with an arrow linking two of the squares that had appeared consecutively in time during the presentation phase. Participants had to decide whether these two squares had been presented in the order that the arrow indicated. In the second version, printed English letters or ASL fingerspelled letters were superimposed on each square, to allow phonological encoding

of the sequences. In the third version, only the two letters that had appeared consecutively in time during presentation were presented in the center of the probe display, reducing the usefulness of spatiotemporal indexing, but still allowing for phonological encoding. The results showed that, whereas deaf signers outperformed hearing non-​signers on the first version that encouraged the use of spatiotemporal indexing, the same subjects did not differ significantly on the second version, which allowed for both spatiotemporal and phonological encoding. In the third version, which prohibited spatiotemporal indexing and encouraged phonological encoding, the opposite pattern emerged, and the hearing non-​signers outperformed the deaf signers. These results strongly suggest that the availability of spatiotemporal information benefits order maintenance in signers, whereas the availability of phonological information in STM benefits order maintenance in speakers.

29.7  So where does this leave the experimental study of WM in signers? The remainder of this chapter addresses several important methodological considerations that may have contributed to some of the divergent findings reported in the literature on WM capacity in signers, and briefly discusses what we know about the role of WM in sign language processing.

29.7.1  Participant considerations

One of the most obvious considerations when comparing the results of different studies is variation in sample size. This is especially relevant in the context of experimental studies with deaf or hearing signers because it can be very difficult to find a sufficient number of study participants, especially if they have to fulfill certain selection criteria, for example, in age of acquisition or educational background. The sample sizes for some of the studies presented in this chapter were quite small and included fewer than 15 signers (e.g., Keehner & Gathercole 2007; Koo et al. 2008; Rudner & Rönnberg 2008; Gozzi et al. 2011; Hirshorn et al. 2012; Wang & Napier 2013), which reduces statistical power and limits the generalizability of the findings. However, it should be noted that some of the studies with larger samples of deaf signers included both native signers and signers who started acquiring a sign language in childhood (e.g., Wilson & Emmorey 1997; Geraci et al. 2008; Emmorey et al. 2017), who may be more likely to rely on mixed coding strategies (Wilson et al. 1997; Marschark & Mayer 1998; Rudner & Rönnberg 2008). In addition, as discussed by Andin et al. (2013), even for some of the studies with relatively large samples of adult native signers, it is often not clear whether the groups may have differed in, for example, non-verbal IQ or educational background. The few studies that investigated modality effects within a sample of hearing signers are interesting in that respect, because these automatically control for such individual differences. However, testing bilingual populations introduces other possible confounds, for example, translation strategies (e.g., García-Orza & Carratalá 2012; James & Gabriel 2012; van Dijk et al. 2012; Wang & Napier 2013; Liu et al. 2016), and differences in relative language proficiency in the two language modalities (Hall & Bavelier 2011).

29.7.2  Task considerations

Unfortunately, practically any methodological aspect of an experimental STM or WM task can influence participant performance and prevent direct comparison of findings
from studies that used different methodologies. This ranges from clear methodological differences in, for example, presentation modality (e.g., print versus spoken/​signed presentation) and the nature of the stimuli (e.g., pictures versus spoken words/​signs) to relatively subtle differences in the timing of stimulus presentation (e.g., inter-​stimulus interval of items in span tasks) or the difficulty of the secondary task in complex span measures (e.g., solving an equation or judging the plausibility of a sentence). One particularly important aspect to consider is whether the stimulus presentation allows for multiple coding strategies, for example, a combination of spatial and temporal codes (Hirshorn et al. 2012). This may be especially relevant for STM studies that use fingerspelled letters, which may trigger both sign-​based and speech-​based encoding processes. For example, Sehyr et  al. (2017) found that adult deaf ASL signers exhibited English phonological similarity effects as well as ASL manual similarity effects in a serial recall study with fingerspelled words. Less evident perhaps, but equally important, are differences between studies in the scoring method. Simple and complex span tasks can be scored in different ways, and it has been argued that some scoring methods are more sensitive to individual differences in WM capacity than other methods (see, e.g., Conway et  al. (2005) and Friedman & Miyake (2005) for discussion). Most of the studies discussed in this chapter calculated a single recall score (but see, e.g., van Dijk et  al. 2012; Andin et  al. 2013; Wang & Napier 2013). For simple spans, this is often the longest list length that was correctly recalled (the traditional scoring method). Based on a comparison of four common scoring methods for the reading span task, Friedman & Miyake (2005) argue that continuous scoring methods, for example, the total number of correctly recalled words across all sequences, have better distributional properties and higher correlations with scores on reading comprehension measures than traditional scoring methods.
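The difference between the traditional and continuous scoring methods mentioned above can be illustrated with a short sketch. The input format (a list of per-trial results) and the two particular variants implemented (highest list length fully recalled vs. total items recalled across all trials) follow the description in the text; other variants discussed by Conway et al. (2005) and Friedman & Miyake (2005) are not implemented here.

```python
from typing import List, Tuple

# Each trial: (list_length, number_of_items_recalled_correctly_in_that_trial)
Trial = Tuple[int, int]


def traditional_span(trials: List[Trial]) -> int:
    """Highest list length at which the whole list was recalled correctly."""
    return max((length for length, recalled in trials if recalled == length), default=0)


def continuous_score(trials: List[Trial]) -> int:
    """Total number of correctly recalled items, summed over all trials."""
    return sum(recalled for _, recalled in trials)


if __name__ == "__main__":
    # Hypothetical participant: perfect at lengths 2-4, partial at 5 and 6.
    trials = [(2, 2), (3, 3), (4, 4), (5, 3), (6, 2)]
    print("Traditional span:", traditional_span(trials))   # 4
    print("Continuous score:", continuous_score(trials))   # 14
```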

29.7.3  The role of WM in sign language processing

As discussed earlier, complex span tasks appear to be better predictors of more general language processing abilities of hearing speakers than STM tasks. Although experimental studies of STM and WM in signers are often firmly embedded in the discussion of modality differences in the processing of auditory-vocal and visuospatial languages (see, e.g., Geraci et al. 2008; Marshall et al. 2011; Cecchetto et al. 2017), there have actually been very few studies that directly investigated the relationship with language processing in signers. Mann et al. (2010) found positive correlations between British deaf children's performance on a non-sign repetition test, which is often used as a measure of phonological STM, and their BSL comprehension abilities. Similarly, Holmer et al. (2016) found significant associations between sign imitation and word reading, sign language comprehension, and phonological awareness of signs in deaf and hard-of-hearing children. For adults, van Dijk et al. (2012) reported significant correlations between STM word and sign spans under oral articulatory suppression and the quality of interpreted narratives by NGT-Dutch interpreters, which they argue may reflect an enhanced ability to bind information from different sources in the episodic buffer of STM. Furthermore, Hirshorn et al. (2015) investigated the contribution of phonological knowledge and memory skills (serial recall and free recall) to reading comprehension in oral deaf participants and native deaf signers. Although the two groups performed similarly on both memory measures and had similar reading levels, free recall scores correlated significantly with reading comprehension for the native signers, while serial recall scores and reading comprehension correlated significantly for the oral deaf participants. According
to the authors, this may reflect increased reliance on semantic instead of phonological processes for the native signers. In line with these findings, Emmorey et al. (2017) reported significant correlations between scores on a complex sentence span task with a serial recall component and scores on a narrative comprehension task in English, but not in ASL. Further, Marschark et al. (2016) did not find significant correlations between deaf and hearing signers' performance on two linguistic complex WM spans and their self-rated expressive and receptive sign language abilities. Although based on null findings, these results raise the possibility that sign language comprehension may rely less on serial order encoding than spoken language comprehension. Instead, the researchers in both studies speculate that signers may rely more on spatial WM resources during (sign) language comprehension.

In summary, the few available studies that directly looked at the relationship between STM or WM performance and sign language processing reinforce the conclusion that simple or complex span tasks with a serial recall component are likely not very good predictors of more general sign language processing abilities (cf. Hall & Bavelier 2010).

29.8  Conclusion

Four main, interrelated conclusions can be drawn from the body of evidence from experimental research on STM and WM in signers reviewed in this chapter. First, the strongest evidence for modality effects in STM involves larger phonological STM spans for speech-based versus sign-based information. In contrast to phonological STM, speakers and signers often perform similarly on more general linguistic and symbolic measures of WM capacity such as production spans, comprehension spans, and operation spans. See Figure 29.4 for a schematic representation of modality effects in the landscape of WM tasks.

Figure 29.4  Schematic representation of different WM tasks along two dimensions: serial maintenance demands (horizontal axis) and linguistic demands (vertical axis). Symbol legend: diamonds = STM tasks; triangles = complex WM tasks. Color legend: black symbols = consistent modality effects reported; gray symbols = some studies report modality effects; unfilled symbols = generally no modality effects reported

Second, there is increasing evidence that this modality effect in phonological STM reflects a reduced reliance by signers on serial maintenance of phonological information. The underlying explanation for this is still unclear, but may be connected to potential domain-​general differences in information processing in the auditory and visual modality, domain-​specific differences in the phonological processing load of words and signs, or competition for spatial processing resources when encoding signs in WM. Third, in addition to phonological STM tasks, some studies have found that signers outperform non-​signers on visuospatial STM tasks, possibly as a result of signing experience and/​or perceptual reorganization in deaf signers. Methodological variation across studies in, for example, the modality of presentation and recall (e.g., print vs. sign), and background of the signers (e.g., chronological age, age of acquisition, hearing status), likely has contributed to these and other mixed findings reported in the experimental literature. Finally, the few available studies on the role of WM in sign language processing suggest that phonological STM tasks and other linguistic WM measures that rely on serial recall have limited value in predicting language comprehension in the visuospatial modality.

Acknowledgments

Preparation of this chapter was supported by grants from the Spanish Ministry of Economy and Competitiveness (PSI2016-76435-P) and the European Union's Horizon 2020 research and innovation program under grant agreement 654917.

References Abrahamse, Elger L., Jean-​Philippe van Dijck, & Wim Fias. 2017. Grounding verbal working memory: The case of serial order. Current Directions in Psychological Science 26(5). 429–​433. Abrahamse, Elger, Jean-​Philippe van Dijck, Steve Majerus, & Wim Fias. 2014. Finding the answer in space:  The mental whiteboard hypothesis on serial order in working memory. Frontiers in Human Neuroscience 8(932). Adams, Eryn J., Anh T. Nguyen, & Nelson Cowan. 2018. Theories of working memory: Differences in definition, degree of modularity, role of attention and purpose. Language, Speech & Hearing Services in Schools 49(3). 340–​355. Alamargot, Denis, Eric Lambert, Claire Thebault, & Christophe Dansac. 2007. Text composition by deaf and hearing middle-​school students: The role of working memory. Reading and Writing: An Interdisciplinary Journal 20(4). 333–​360. Alba de la Torre, Celia. 2016. Wh-​questions in Catalan Sign Language. Barcelona: University of Barcelona PhD dissertation. Andin, Josefine, Eleni Orfanidou, Velia Cardin, Emil Holmer, Cheryl M. Capek, Bencie Woll, Jerker Rönnberg, & Mary Rudner. 2013. Similar digit-​based working memory in deaf signers and hearing non-​signers despite digit span differences. Frontiers in Psychology 4(942). Andrade, Jackie. 2001. The working memory model: Consensus, controversy, and future directions. In Jackie Andrade (ed.), Working memory in perspective, 281–​310. New York: Psychology Press. Archibald, Lisa. 2017. Working memory and language learning: A review. Child Language Teaching and Therapy 33(1). 5–​17. Baddeley, Alan D. 2000. The episodic buffer:  A new component of working memory. Trends in Cognitive Sciences 4(11). 417–​423. Baddeley, Alan D. 2003. Working memory and language: An overview. Journal of Communication Disorders 36(3). 189–​208. Baddeley, Alan D. 2007. Working memory, thought and action. Oxford: Oxford University Press. Baddeley, Alan D., Susan E. Gathercole, & Costanza Papagno. 1998. The phonological loop as a language learning device. Psychological Review 105(1). 158–​173.

INDEX

Note: Page numbers in italics refer to figures; numbers in bold refer to tables. ABSL see Al-​Sayyid Bedouin Sign Language (ABSL) acquisition see language acquisition Action Role Shift (AcRS) 354–​356, 367, 510–​511, 520, 521–​522, 524, 526–​527n18 adverbials: acquisition of 256, 550; agent-​ denoting 156, 161; in elliptical clauses 301; of manner/​intensity 196; manual 415; modification of 303; negative markers as 202, 269; non-​manual 160, 207, 256, 549, 550; postnominal index as 409; quantifiers 424, 426; in relative clauses 329; sentential 492; temporal 196–​198, 329, 333, 342, 361 age of acquisition (AoA) 40, 46, 55, 57, 58, 60, 82, 676 agreement: acquisition of 125, 126; auxiliaries 100–​101, 101, 106, 110, 113; in BSL 125; co-​ localization 114; in DGS 125, 126; in HKSL 125; in Libras 125; in TSL 100 agreement markers/​morphology 96–​97, 100, 102, 113, 114, 129, 149, 356, 357, 396n2, 511; classifiers as 146–​149, 168; gender agreement in 149; grammatical 356, 357; non-​manual 219, 556; principles of 103–​104; spatial 652–​626; see also verb agreement, verbs Aktionsart see lexical aspect (Aktionsart/​event structure) allomorphy 1, 19–​22, 26, 28n31 alloshapes 5–​6 Al-​Sayyid Bedouin Sign Language (ABSL): compared to homesigns 657; as emerging sign language 647–​656; future directions for study 655–​656; morpheme grammaticalization in 199; prosodic and

symbolic features in 85; spatial agreement in 653; word order in 648, 652 ambiguity 406, 434, 441, 445, 462–​463, 501–​502, 504; in ASL 467; in LSC 434–​435; scope 424, 434–​435 American Sign Language (ASL): acquisition of 40, 125, 256–​257, 315–​317, 393, 445, 607; ambiguity in 467; argument structure alternations in 155; backwards verbs in 114; and the BiBiBi project 619; and bimodal bilingualism 626; bound markers of aspect in 202–​203; brain activation in 53; causatives in 152; classifiers (constructions/​ predicates) in 139, 144, 153, 158–​161, 167, 381, 388; coarticulation in 41; code-​blending studies 624; conjunction/​disjunction in 448–​450, 449; content interrogatives in 233, 234; coordination strategies 442; correlative clauses in 337–​338; crossover effects in 460–​461; default plural loci in 513; definiteness in 411; deontic modals in 415; discourse anaphora in 458; as discourse-​ configurational language 559n5; discourse markers in 490; disyllabic monomorphemic signs in 72; doubling in 603; experimental investigations of aspect in 208; externally-​ headed relative clauses in 332; focus marking in 594–​595; free aspectual markers in 200–​201; use of gesture in 572, 578, 585; grammatical processing in 394; handshapes in 6, 8, 49, 73, 74, 139, 140–​141, 162, 405, 430, 454, 512, 518; high and low loci in 515, 518; iconicity in 23, 24, 470, 476, 459, 511, 512–​513, 518, 524, 577–​578, 580, 585; implicatures in 441–​446; indefinite

Index determiners in 406; indefinite pronouns in 407; indexical points in 220–​221; internally-​ headed relative clauses in 328, 330; intonational phrases in 77; and left-​ hemisphere activation 48; left periphery in 597; lexical processing in 60; loci as variables in 502–​503; loci in 523; markers of domain restriction 416; markers of event structure in 206–​207; modal meaning in 492; mouth gestures in 75; multiple wh-​questions in 250–​251; negation in 271–​273, 274, 276–​277, 278; non-​dominant hand spread in 76; non-​ manual features 79; non-​manual markers in 249–​250, 538–​539, 555; non-​manual verb agreement in 101–​102; non-​specificity in 408; noun-​verb pairs in 213–​214; and the NP/​DP parameter 217–​218; null arguments in 149, 305, 310; order of signs in noun phrase 409; particles in 491; phonological comprehension in 34, 37; possessives in 225–​226; priming studies 128–​129; pronoun use in 460–​463, 470, 504; prosody in 81; quantification in 425, 426, 428, 436; question-​answer pairs 602; reference tracking in 310; relative clauses in 326, 339; response particles in 485–​487; role shift in 353, 358, 360, 510–​511, 522, 526–​527n18; scalar implicatures 448–​453; scalar words in 445; scope ambiguities in 434; sentence processing in 258; serial recall in 671, 672; and short-​term memory 665; use of sign space in 383; sluicing in 304; spatial loci in 446, 447, 448; subordinate clauses in 345; syncretisms in 469; syntactic processing studies 129; syntax-​prosody interactions in 545; test battery for 175; topic marking in 592, 593; and ToT/​ToF effects 52; turn-​ taking signals in 482; variable visibility in 506; verb agreement in 22, 97, 107, 357; and visible event decomposition 512; wh-​ movement in 104, 244, 252; word order patterns in 223; working memory studies 677 anaphora: backward 302; complement set 513, 515; cross-​sentential 473–​474, 476; discourse 458–​477, 488; donkey 474; in embedded clauses 352; e-​type 344, 473–​474; maximal set 513; modal 505; nominal 504–​505; patterns of 505; resolution 459; restrictor set 513; temporal 505; and use of pronouns 502–​504; zero 310, 312, 313, 320; see also discourse anaphora answering systems: polarity-​based 485–​488; truth-​based 485–​488 aphasia 48, 59, 288, 580 A-​quantifiers 423–​424, 426–​428, 433, 436; see also quantifiers Argentine Sign Language (LSA) 100, 156

argument structure 103, 127, 128, 140–​142, 146, 152–​153, 157–​159, 162, 165, 168, 174, 195, 206–​207, 380, 396n2, 426, 581, 652; alternations 155–​157; in ASL 155 articulation: co-​ 1, 20, 21, 41, 407; double 23; holding/​pausing during 545; of manual signs 36, 74–​76, 407, 493; of non-​manual markers 545, 555, 558; phonetic principles of 2; place of 8–​10, 14, 34, 35, 72, 114, 151, 152, 276, 411; rate /​speed of 34, 608, 666; of sign aspect 23; simultaneous 27n18, 256; in spoken languages 27n21, 77, 598; topic/​focus 431, 444; type of 545; see also articulators articulator node 8, 14, 27n22, 154 articulators: and the agreement principle 129; bodily 572; configuration of 56; hands as 8, 12–​13, 256, 576, 630; head as 249, 545, 555; linguistic 355, 368; location of 10, 14, 15; multiple 418, 494; position of 207; size of 34; studies of 86; sub-​ 14, 15, 27n14, 27–​28n26; see also articulation; articulator node articulatory rehearsal mechanism 665 artificial language studies 640–​641 Asian sign languages 6; see also Beijing Sign Language; Chinese Sign Language; Hong Kong Sign Language (HKSL); Indian Sign Language (IndSL); Indo-​Pakistani Sign Language (IPSL); Iranian Sign Language; Japanese Sign Language (JSL); Tianjin Sign Language (TJSL); Taiwan Sign Language (TSL) ASL see American Sign Language (ASL) aspect 194–​209; in ASL 202–​203, 208; bound markers of 202–​205, 204; continuous 199, 200, 202, 209n2; event structure and reference time representation 205–​208; experimental perspectives 208; free aspectual markers 200–​202; grammatical aspect 198–​199; habitual 202, 209n2, 427, 432; imperfective 199, 201, 203; in GSL 100, 200, 202; in HZJ 199–​201, 203–​205, 204, 208, 220; in IPSL 200, 203; iterative 202–​203, 209n2; in LSE 203; in NicSL 203; perfective 199, 200–​202; in SSL 203; see also grammatical (viewpoint) aspect; lexical aspect (Aktionsart/​event structure) assimilation 1, 8 Attitude Role Shift (AtRS) 354–​356, 360, 362, 364, 508–​509, 510, 520, 521, 524, 526–​527n18; in LSC 508–​509 Australian Sign Language (Auslan): classifier acquisition in 175–​176, 177; determiner phrases in 214; use of gesture 179; lexical processing in 53; working memory in 672 Austrian Sign Language (ÖGS): markers of event structure in 207, 207; modal meaning

Index in 492; negation in 270; non-​manual markers in 415; turn-​taking signals in 482 autism 451 backchannels 484, 485 basic units see phonological parameters Beijing Sign Language 139 BiBiBi project see Binational Bimodal Bilingual (BiBiBi) project bilingualism 614–​615; bimodal 59–​61; unimodal 59, 614; see also bilinguals; bimodal bilinguals bilinguals: cognitive advantage of 61; M2L2 60; unimodal 61; see also bimodal bilinguals bimodal bilinguals 46, 55, 59–​61, 614–​631; and ASL 626; children as 257; and code-​ blending 623–​630; cross-​language activation 621–​623; cross-​linguistic influence 617–​621; Deaf children of Deaf parents with cochlear implants (DDCI) 318–​319; development of 616–​621; and DGS 622; and LIS 626, 628, 631n6; and null argument grammar 317–​320, 620–​621; separation of languages in 616–​617; simultaneity and blending 621–​630; word order production of 619 Binational Bimodal Bilingual (BiBiBi) project 617–​620, 629; and ASL 619; and Libras 619 binding: between antecedent topic and null category 297; blocking of 299; conditions of 461, 470; cross-​sentential 458, 473, 488; dynamic 458, 468, 472, 488, 505; high and low loci under 516; local 460; non-​local 460; pronoun 466; quantificational 336, 427, 458, 468, 488, 507; in spoken languages 500, 504; variable 329, 343 Binding Theory 460, 464 biphasic electro-​physiological response 131 blinks 77, 80, 84, 88, 89n1, 326, 339, 341, 346n23, 483, 551, 554, 559; in JSL 77; see also eye articulations body movements 47–​48; backward lean 598; body posture 339; body shift 508; leaning 356–​357, 415, 484, 492 bodypart classifiers (BPCL) 141, 153–​154, 155, 158–​159, 162, 166, 168–​169, 170n16, 370–​371 boolean elements 600 boundary markers/​marking 80, 82, 84, 494, 551, 559 brain activation: in ASL 53; left hemisphere 48, 53, 57, 86, 183–​186, 188, 190, 287–​288, 385, 386, 388, 395, 557, 579 Brazilian Sign Language (Libras): acquisition of 257; agreement acquisition in 125; and the BiBiBi project 619; classifier acquisition in 175–​176, 177; code-​blending studies 624; content interrogatives in 233; definiteness

in 412; determiner phrases in 220; doubling in 603, 604; externally-​headed relative clauses in 332; late learners of 607–​608; modal meaning in 492; negation in 273; topicalization in 607–​608; verb agreement in 104, 106, 114–​116; wh-​doubling in 241–​242 British Sign Language (BSL): agreement acquisition in 125; atypical signers of 581, 582; classifier acquisition in 177; discourse markers in 485–​488, 489; free aspectual markers in 200; use of gesture in 572, 575, 585; handshape inventory 179–​180; indicating verbs in 380; lexical processing in 48, 52, 56, 57; native signers of 393; neurolinguistic studies 184–​186; neurolinguistic studies 86, 581–​582; neurophysiology of negation in 287–​290; non-​manual markers in 405, 557; null argument studies 318–​320; possessives in 225; phonological comprehension in 37; sign recognition in 56; spatial language study 386; working memory in 673 brow movements 70, 80, 81, 83, 84, 250, 492, 555, 568; in ASL 540–​545; furrowed (see also lowered) 78, 79, 254, 255, 256, 258, 287, 415, 432, 492; lowered (see also furrowed) 248, 249, 250, 252, 254, 255, 291n7, 406, 408, 410, 519, 526n17, 533–​534, 541, 543, 544, 545, 547, 548, 552, 554, 557, 560n9; and negation 291n7; neutral 547; raised 77–​78, 78, 79, 80, 83, 87, 116, 156, 248, 249, 250, 253, 254–​255, 260n11, 271, 291n7, 298, 326, 328–​329, 333, 334, 337, 339, 344, 345, 346, 346n14, 408, 410, 413, 432–​433, 484, 491, 493, 494, 505, 526n7, 531, 532, 533–​534, 540–​545, 547, 548–​549, 551, 552, 554, 557, 558, 560nn8–​9, 592, 594, 595, 598, 607, 609n7; slanted 254 BSL see British Sign Language (BSL) buoys 490; and information structure 604–​606 cartographic syntax 549 Catalan Sign Language (LSC): Attitude Role Shift in 508–​509; backwards verbs in 114; classifiers in 156; collocation between specificity and domain restriction 416–​417; definiteness in 412; distributive morphology in 435; focus marking in 599–​600; indefinite determiners in 406; indefinite expressions in 435; indefinite pronouns in 406–​407, 413–​414; iconicity in 54; internally-​headed relative clauses in 328; L2 learners of 320–​321; markers of verbal plurality 429; mixing of perspectives in 510; morphological markers in 428; negation in 269–​270; non-​ manual markers in 538; non-​specificity in 408, 408; partitive constructions in 417, 417; possessives in 225; pronoun use in 462–​463;

Index prosody in 88; quantification in 425, 427, 432; role shift in 352, 526–​527n18; scope ambiguities in 434–​435; subordinate clauses in 345; verb agreement in 96, 96–​99, 98, 104, 105, 106; wh-​movement  in  246 Categorical Perception 34–​36, 40, 42 causatives 141, 152, 153, 157; in ASL 152; in HKSL 152 Chain Reduction 603–​604 character viewpoint (CVPT) 392 cheeks, puffed /​raised 549, 550; see also facial expressions cheremes 13; see also phonemes children of deaf adults (CODAs) 60, 552–​553, 615, 616, 631n1; see also bimodal bilinguals Chinese Sign Language 2 chin positions: back 547; down 410, 547; forward 547; up 547, 258 classifiers: acquisition of 175–​179; affix 141; agreement analysis and argument structure 152–​161; as agreement markers 146, 149; analyses in terms of agreement 146–​161; analyses in Distributed Morphology 146–​152; in ASL 139, 144, 153, 158–​161, 167, 381, 388; in Auslan 175–​176, 177; bimanual 186; bodypart 141, 153–​154, 155, 158–​161, 162, 166, 168–​169, 170n16, 370–​371; in BSL 177; constructions 179–​180, 387–​388, 518–​519; cross-​linguistic variation 158–​161; depth and width 140; determining argument structure 156; in DGS 144, 146, 170n14, 392; in DSL 142; entity 175, 392; experimental perspectives 174–​191; extension 159, 169n10; extent 140–​141; gesture and classifier constructions 179–​180; handle/​ handling 140–​141, 153, 154–​155, 161, 163, 164, 170n13, 195, 392; handshapes of 139, 179–​183; as heads of functional projections 153–​157; in HKSL 139, 157, 158–​159, 160, 168–​169, 169n12, 176; instrumental 140–​144, 168, 195; in ISL 101, 142–​143; in Libras 175–​176, 177; limb 141; in LSC 156; in LSF 166–​167; morphosyntactic properties of 139–​140; neurolinguistic studies 183–​188; noun incorporation analysis 142–​145; perimeter-​shape 140; as phonological units 24; predicates 149–​151, 150, 158–​161, 164–​168, 379–​380, 381, 382, 394, 454; projection of verbal classifier phrase 152–​153; psycholinguistic studies 180–​183; in RSL 170n14; semantic 140, 141, 159, 430; semantic analysis of classifier predicates 164–​168; size and shape specifiers (SASSes) 140, 141; surface 140; on-​surface 140; syntactic structure of classifier predicates 162–​163; in TİD 392; theme 141, 142–​144; theoretical perspectives 139–​169; in TJSL

139, 157, 158–​159, 168–​169, 169n12; tool 141; transitive-​transitive alternation based on instrumental classifiers 157–​158; typology of 140–​141; verbal 146; verb root/​stem analysis 141–​142; whole entity 140–​141, 152, 153, 155, 158–​161, 163, 370, 430; see also morphemes class nodes 14, 15, 16, 19, 25, 26 clefted constructions 252; cleft-​like 594, 601, 608, 609n2; wh-​ 233, 254–​255, 260n11, 433, 531, 541, 544, 598, 601, 602, 608, 609n2 clefted question analyses 242–​243 clitic analysis of agreement 116–​117 cliticization 108 clusivity, encoding of 435 clustering: phonological 52; semantic 52 coalescence 74, 74 co-​articulation 1, 20, 21, 41, 407; in ASL 41; coarticulatory effects 41; see also articulation cochlear implants (CI) 318; children with 615–​616, 670 code-​blending 60, 623–​630; in ASL 624; classifications of types 624–​627, 627; correlations 627–​628; in FinSL 624; and the Language Synthesis model 629–​630; in Libras 624; in LIS 624 code-​switching 624, 630 cognitive development 176, 286, 290, 451, 453 co-​localization, agreement as 114 Complementizer Phrase (CP) 269, 595 compositionality 1, 2, 22, 250, 532 compound root 151 comprehension: in BSL 37; of classifier constructions 17–​179; of conditionals 551; errors in 185; gesture 48, 571; language 37, 39, 40, 41, 42, 89, 671, 679; lexical processing 44–​45, 48, 49, 55–​58; linguistic experience 33, 36–​3; phonological 33–​42; of prosody 70; reading 671, 677; sign language 41, 45, 47, 394, 677–​678; signs 37, 46, 48, 49–​50, 54, 55, 58; spans 678; of spatial syntax 184, 186, 189, 391; spoken language 371, 678 conditionals 79, 80, 256, 288, 533; counter-​ factual 78, 339 conjunction(s) 441, 442, 444, 446, 448–​450, 531, 543, 549, 551, 546, 597; in ASL 448–​450, 449 constraints: context-​sensitive 3–​4; final-​over-​ final 267–​268; language-​specific 2; one-​ hand 6–​7; one-​location 8; one-​movement 11; Possible Word Constraint (PWC) 37; shift-​together  360 content interrogatives 232–​259; acquisition of 256–​257; in ASL 233, 234; doubling constructions 234–​237; embedded 252–​255; experimental perspectives 255–​258;

Index in a homesign system 257–​258; and the leftward/​rightward controversy 234–​237; in Libras 233; in LIS 233; multiple wh-​ questions 250–​252; non-​manual marking in 248–​250; positions of interrogative signs 233–​246; processing of mismatches 258; question particles as clause-​typers 246–​248; theoretical perspectives 232–​255; in young sign languages 258; see also interrogatives; wh-​questions context shift 166, 500, 508–​509, 520–​522, 524; theories of 510–​511 context-​shifting operators 166–​167, 359–​363, 508–​509 contrast, markers of 599–​601 conventionalization 179, 190 co-​reference processing 383–​384 correlative clauses 325, 326, 336–​338, 341, 344, 345; in ASL 337–​338; in LIS 337; see also relative clauses Corsi blocks 673, 675 creole languages 573, 637, 641–​642 Croatian Sign Language (HZJ): aspect in 199, 220; bound markers of aspect in 203–​205, 204; experimental investigations of aspect in 208; free aspectual markers in 200–​201; markers of event structure in 206; negation in 273–​274 cross-​linguistic activation 60, 621–​623 cross-​linguistic influence 617–​621 cross-​modal plasticity  46–​47 crossover 460–​461, 464, 504; in ASL 460–​461; in LIS 460 Danish Sign Language (DSL): classifiers in 142; discourse markers in 489, 490; markers for definiteness 407; order of signs in noun phrase 409; role shift in 360; turn-​taking signals in 482, 483 declaratives 77, 255 decoupling principle 365–​366 definiteness 403, 405; in ASL 411; in DSL 407; in DGS 408; in ISL 407; in Libras 412; in LSC 412; markers for 220, 405–​407, 408; non-​manual markers for 407–​408, 418; theoretical study of 418; types of 412; and uniqueness 404; weak 412 definite noun phrases (NPs) 403 demonstration 164–​166, 454; action 165; context-​dependent 165; language 165 deontic modals 415, 492; in ASL 415 Dependency Phonology 5 determiner phrases (DPs) 213–​226; in Auslan 214; building 215–​222; categorical status of pointing signs 219–​222; in HKSL 220; in Libras 220; in LIU 214; in NSL 214; possessives 225–​226; sign languages and the

NP/​DP parameter 216–​219; in TİD 214; word order patterns 222–​225 determiners: indefinite 406; in LSC 406; lexical 405–​409, 418 DGS see German Sign Language (DGS) directionality 95–​96, 123–​124, 125, 131; in personal pronouns 126–​127; and verb agreement 135 discourse anaphora 458–​477, 488; in ASL 458; dynamic semantics 471–​476; and the encoding of space 464–​466; loci as variables or features 466–​468; modal meanings of 481, 491–​494; pictorial loci 470–​471; and the semantics of pronouns 461–​464; spatial syncretisms 468–​470; and the syntax of pronouns 460–​461 discourse cohesion 396 discourse markers 482, 489–​490, 489, 495n4; in ASL 490; in BSL 485–​488, 489; in DSL 489, 490; in DGS 489, 489, 493; in NSL 489; in NZSL 489 discourse particles 480–​495; functions of in sign languages 481 discourse regulation 481–​488 Discourse Representational Theory (DRT) 397n6, 468 discrimination tasks 35, 183 disjunction 441, 442, 444, 446, 448–​450; in ASL 448–​450, 449 Distributed Morphology (DM) 145, 147–​148, 151–​152, 168, 257 disyllabic morphemic signs 72; in ASL 72 domain restriction 416–​417 dominance: condition 12, 227n4, 618; left hand 580; multi-​ 251, 605, 606; reversal 599 donkey sentences 427, 473 double articulation 23; see also patterning doubling 143–​144, 169n4, 251, 252, 260n12, 260n14, 491, 594, 598; in ASL 603; with content interrogatives 234–​237; in DGS 604; focus 236–​237, 607, 608; and information structure 603–​604; int-​sign 233, 253, 257; locus 383–​384, 394; in Libras 603, 604; modal 542; negative 273; in RSL 603; sentence-​final 274; subject clitic 116; wh-​ 234, 241–​242, 243, 619; wh-​doubling in Libras 241–​242; wh-​doubling in LIS 243 D-​quantifiers 423–​426, 432, 436; see also quantifiers DSGS see Swiss German Sign Language (DSGS) DSL see Danish Sign Language (DSL) dyadic operators 559 dynamic binding 458, 468, 472, 488, 505 dynamic theories 459

Index Effort Code 598 electroencephalography (EEG) 46, 128, 131; see also electrophysiological studies electrophysiological studies 50, 53, 132; see also electroencephalography ellipsis 520–​521; analysis of null arguments 304–​306; in restrictive relative clauses 340; sluicing 304; VP ellipsis 296, 301–​304 embedded clauses see subordinate clauses embedded content interrogatives 232–​233, 250, 252–​255, 299, 601; as complement clauses 252–​254; in rhetorical questions 254–​255 embodiment 650; motor 391 emerging sign languages 383, 566, 636, 647–​656; see also Al-​Sayyid Bedouin Sign Language (ABSL) emphatic focus see focus marking end-​marking (telicity) 162–​163, 197, 198, 202, 205, 206, 208, 215, 512, 524 entailments 197–​198, 443, 446 epistemicity 405, 412–​416, 491 epistemic modality 415 epistemic specificity 413–​414 errors: phonological 57; reversibility 126; sign error corpora 58; slip of the tongue/​hand 52; in spatial locations and movements 189; studies of 2; substitution 59 events: decomposition of 512; modification 165; structure 194–​198, 205–​208; visibility 519–​520; see also lexical aspect (Aktionsart/​ event structure) event related potential (ERP) 47, 51, 55, 131–​132, 134–​135, 386–​387; studies on sign language agreement 131–​135; and verb agreement violations 134–​135 event visibility: in LSF 520 Event Visibility Hypothesis 162, 205–​206, 215, 512, 520, 524–​525 evidentiality 491 exclamatives 257–​258 executive control functions 61, 669 exhaustivity operator 255, 444 eye articulations 549, 568; blinks 77, 80, 84, 88, 89n1, 326, 339, 341, 346n23, 483, 551, 554, 559; squinting 78, 326, 328–​329, 339, 344, 345, 346, 407, 410, 548–​549; eyes widen 83, 531; in JSL 77; see also eye gaze eyebrows see brow movements eye contact 483, 495n1 eye gaze 101, 102, 130, 218–​219, 326, 356–​357, 373n4, 406, 408, 481, 483; change in 551; for checking comprehension 557; eye-​tracking of 556–​557; as focus marker 557; for marking role shift 557; and null arguments 299–​301; in null arguments 314; for object agreement 533; and point of view 557;

shift in 508; for turn-​taking 557; for verb agreement 533, 556–​557 eye-​tracking studies 102, 129–​131, 300, 314; of non-​manual markers 556–​557 facial action coding system (FACS) 559n1 facial expressions 190, 289, 326, 364, 526n17, 549, 550, 568; affective 288; brow movements 77–​78, 78, 79, 84, 87; cheeks, puffed/​raised 549, 550; gestural 356; happy 521; masking 86–​87; neural processing 557; nose wrinkle 258, 337, 408; while singing 78; see also brow movements; eye expressions; mouth gestures, mouthings Fast Speech Phenomena 535 feature geometry 4 Final-​Over-​Final Constraint (FOFC) 267–​268; other distributions of negation 273–​275; and SOV sign languages 268–​270; and SVO sign languages 270–​273 Finger Configuration 5–​6, 5 fingers: flexion of 5; wiggling 483, 484, 560n8 Finger Selection 5–​6, 5 fingerspelling 72; Chinese 6 Finnish Sign Language (FinSL): code-​blending studies 624; null arguments in 304; topic marking in 593; wh-​movement  in  246 Flemish Sign Language (VGT) 482 flip gestures 257–​258 fluency tasks 52 focus: contrastive 602, 607; focus particles 495n5; prosodic marking in ASL 608; and prosodic prominence 598; see also focus marking focus marking 594–​595, 607–​608; in ASL 594–​595, 608; in DGS 594, 598–​599; in LSC 599–​600; in LSF 594; prosodic 608; in RSL 594–​595, 599 free aspectual markers 200–​202; in ASL 200–​201; in BSL 200; in DGS 200; in ISL 200–​201; in LIS 200; in TİD 200 free relatives 335 French Belgian Sign Language (LSFB) 490 French Sign Language (LSF): classifiers in 166–​167; event visibility in 520; focus marking in 594; loci in 502–​503, 516, 523; markers of verbal plurality 429; non-​manual markers in 560n9; pronoun use in 462–​463, 470–​471; quantification in 436; question-​ answer pairs 602; role shift in 353, 510–​511, 526–​527n18; spatial language study 386; working memory in 671 frequency effect 50 functional magnetic resonance imaging (fMRI) 46, 57, 128, 131, 185, 580, 582 functional near-​infrared spectroscopy (fNIRS) 46, 61 functional projections 549

Index gating studies 49 generative syntactic analyses 108–​116 German Sign Language (DGS): acquisition of prosody in 82: agreement acquisition in 125, 126; and bimodal bilingualism 622; classifiers in 144, 146, 170n14; discourse markers in 489, 489, 493; doubling in 604; externally-​headed relative clauses in 332, 334; focus marking in 594, 598–​599; free aspectual markers in 200; of handling classifiers 392; indefinite pronouns in 407; left periphery in 597–​598; locative expression in 388–​389, 389; markers for definiteness 408; mixing of perspectives in 510; modal meaning in 492, 493–​494; negation in 269–​270, 279–​281, 287; non-​manual markers in 538; particles in 491; pronoun use in 462, 468–​469; reciprocal signs in 22; reference tracking in 310; relative clauses in 332, 334, 339, 341; response particles in 485–​488; role shift in 354–​355, 355, 356, 360, 363, 365, 369–​370, 369, 370, 371, 371, 526–​527n18; root compounds in 152; syncretisms in 469; syntactic violations in 386; topic marking in 593; turn-​taking signals in 482; verb agreement in 100, 105, 106, 112, 123, 134, 147 gestures: action primes for 571; adverbial 569; articulatory 33, 34; in ASL 572, 578, 585; in Auslan 179; in BSL 572, 575, 585; character viewpoint strategy 392; and classifier constructions 179–​180, 189–​190; comprehension of 584; contrasted with sign processing 581–​583; co-​sign 23; co-​speech 382, 448, 523, 566, 569, 573, 584; defined 567; depicting motion events 578–​579; flip 257–​258; forms and functions of 567–​568; functions of 567–​568; in homesign systems 84, 644; iconic 572; interactional 568; lexicalization of 567; for modal meaning 494; relationship to speech 570, 570; representational (iconic) 568; role of in language processing 569–​572; and role shift 363–​370; semantic priming effects of 571; in sign languages 190, 577–​579, 583–​585; and sign processing 47–​48, 566–​567, 583–​585; silent 382; and speech comprehension 571–​572; and speech production 569–​570; in spoken languages 365, 572; spontaneous 88; and verb agreement 95 gradient expressions 139, 140, 164, 169n1, 180–​183, 188, 189, 387, 388, 394, 396, 454–​455, 518–​520, 584, 585n1 grammatical (viewpoint) aspect 198–​199; bound markers of aspect 202–​205; free aspectual markers 200–​202;

inventory of 199; in sign languages 199–​205 grammaticalization/​grammaticalize 22, 100, 106, 124, 199, 200, 219, 258, 344, 405, 492, 494, 551, 560n9, 567, 602 Greek Sign Language (GSL): aspect in 100, 200, 202; negation in 276; verb agreement in 100 Grice, H. Paul 440–​441, 444, 453 GSL see Greek Sign Language (GSL) hand movements: in ASL 76; drop-​hands 82; facing 96; flip gestures 257–​258, 286; indexical point 216, 219–​222, 225–​226, 227n4, 227n7, 227n8; insult gesture 26n10; internal 5, 5, 7; indicating location 380; in ISL 75, 76; non-​dominant hand spread 75, 76; palm-​down 490; in PJM 220–​221; in RSL 76, 220; thumb positions 5; tremoring movement 406; in TSL 220; wrist movement 406; see also palm-​up (manual flip) hands: configuration of 8, 35, 169n6; configuration for masculine/​feminine referents 6, 169n6; flexion of the fingers 5; holds 605–​606; interaction features 13; in JSL 6, 196n6; orientation of 103–​104; in RSL 605–​606; sub-​articulators of 14 handshapes 4–​8, 19, 51, 56, 57, 58, 59, 72, 81, 152, 158, 179; abrupt stop 512; in ASL 6, 8, 49, 73, 74, 139, 140–​141, 162, 405, 430, 454, 512, 518; in BSL 179–​180; changes in 6, 512; of classifiers 139, 179–​183; as classifier element 380; complex 6; discrete morphological 454; encoding 396, 397n7; finger configuration 4; finger position unit 4; finger selection 4; selected finger units 4; and semantic clusters 52; in TJSL 139; unmarked 12; use for quantification 430–​431 Handshape Sequencing Constraint 73, 73 Hand Tier Model 17–​18, 72 head movements/​position 84, 88, 344, 356–​357, 373n4, 549, 568; backward 250, 539; change of 547; forward 79, 539; in Japanese Sign Language 545–​547, 555; nodding 326, 341, 484, 492, 493, 547, 554, 555, 604; pulls 555, 556; raised 551; reasons for 558; shake 250, 275, 277, 279, 282–​285, 286, 287, 291n8, 492, 531, 538–​539, 540, 547, 557, 598; thrust 547, 551, 555, 556; see also chin positions; head tilt head tilt 101, 130, 218–​219, 250, 256, 326, 415, 481, 484, 493, 551; backwards 79, 275–​276, 410; and null arguments 299; as prompt for a response 539; for verb agreement 533 HKSL see Hong Kong Sign Language (HKSL) Hold-​Movement model  72 holds 82

Index homesigners 84–​85, 257 homesign systems 220, 566, 573; ABSL compared to 657; content interrogatives in 257–​258; creation of 643–​645; ISL compared to 657; negation in 284–​287; Nicaraguan 214, 649; NicSL compared to 657 Hong Kong Sign Language (HKSL): acquisition of 208; agreement acquisition in 125; analysis of predicates in 153; causatives in 152; classifier acquisition in 176; classifier handshapes in 139; classifiers in 157, 158–​159, 160, 168–​169, 169n12; determiner phrases in 220; externally-​headed relative clauses in 332, 334; free aspectual markers in 200–​201; indefinite determiners in 406; internally-​headed relative clauses in 328; markers for specificity 408; negation in 270–​271; prosody in 77, 88; subordinate clauses in 345; surrogate/​token space in 222; topic marking in 592, 593 hypernym 593 HZJ see Croatian Sign Language (HZJ) iconicity: in ASL 23, 24, 470, 476, 459, 511, 512–​513, 518, 524, 577–​578, 580, 585; and classifier constructions 518–​519; discrete 23; effects in role shift 520–​522; embedded loci 513–​515; and event visibility 519–​520; gradual 23–​24; and high loci 523–​524; iconic variables 512–​518; incidental discrete 24; as link between meaning and form 54–​55; in LSC 54; maximal 366–​367; maximization of 521–​522; and plural pronouns 523; recurrent discrete 24–​25; role of 573; and role shift 511, 524; in sign language production 580; in sign languages 577–​578; and telicity 524–​525 imperatives 82, 155, 196 implicatures 440–​455; in ASL 441–​446; acquisition of 445, 450–​453; based on modality 453–​454; experimental investigations of 445–​446; and formal pragmatics 440–​445; scalar 449–​450; theory of 440–​445; see also scalar implicatures incorporation 141, 142–​145, 163, 168–​169 indefiniteness 405 indefinite noun phrases (NPs) 403 indexical predicates 164 indexical pointing 216, 219–​222, 225–​226, 227n4, 227n7, 227n8; in ASL 220–​221; in IPSL 216; in ISL 216; in LIS 220; in PJM 220–​221; in RSL 220; in TİD 220; in TSL 220; see also pointing indexicals: and context-​shifting operators 359–​363; and role shift 509 indexing, spatiotemporal 675–​676 Indian Sign Language (IndSL): focus in wh-​questions 609n4; question particles in

246–​248; topic marking in 609n7; see also Indo-​Pakistani Sign Language (IPSL) indirect discourse 359, 361–​362 Indo-​Pakistani Sign Language (IPSL): bound markers of aspect in 203; free aspectual markers in 200; indexical pointing sign in 216; wh-​movement in 246; see also Indian Sign Language (IndSL) IndSL see Indian Sign Language (IndSL) inflection 8, 202; see also agreement markers; verb agreement information structure 535, 591–​609; buoys and related strategies 604–​606; contrast 599–​601; description and formalization 591–​592; doubling 603–​604; experimental research 606–​608; focus and prominence 598–​599; and the left periphery 595–​598; question-​ answer pairs 601–​602; in RSL 591; strategies for focus marking 594–​595; strategies for topic marking 592–​593; in the visual-​gestural modality 598–​606 Interface Hypothesis 570 interrogatives 77, 405; clefted question analyses 242–​243; embedded 232–​233, 239, 250, 252–​255, 299, 601; in ISL 253; long-​distance extraction of interrogative signs 239; ‘no movement’ analysis 243–​244; position of interrogative signs 233–​246; question-​ answer pairs 252, 254–​255, 495n3, 560n9, 598, 601–​602; rhetorical 601; sentence-​ final interrogative signs undergoing focus movement 239–​241; single sentence-​initial interrogative signs 237–​238; tag 491, 492; wh-​ questions 78, 79, 83, 129, 250–​252, 545, 546, 554, 560n9, 601, 609n4; yes/​no 8, 78, 80; see also content interrogatives intonation: in ASL 77; contours of 76–​77, 87; cues of 85; patterns of 491; phrases 71, 76–​78, 80–​81, 85, 326, 339, 341, 342, 535, 536, 537, 559; see also prosody IPSL see Indo-​Pakistani Sign Language (IPSL) Iranian Sign Language (ZEI) 492 Irish Sign Language (Irish SL) 491, 493 ISL see Israeli Sign Language (ISL) island extraction 460 isomorphy 79; see also non-​isomorphism (non-​isomorphy) Israeli Sign Language (ISL): backwards verbs in 114; classifiers in 101, 142–​143; compared to homesigns 657; embedded interrogatives in 253; free aspectual markers in 200–​201; indexical pointing in 216; markers for definiteness 407; modal meaning in 494; Non-​dominant Hand Spread in 75, 76; non-​manual markers in 80; phonological structure of signs in 8; prosodic phrasing in

Index 535; prosody in 74, 74, 75; restrictive relative clauses in 339; verb agreement in 22, 102 Italian Sign Language (LIS): agreement acquisition in 125; and bimodal bilingualism 626, 628, 631n6; code-​blending studies 624; content interrogatives in 233; correlative clauses in 337; crossover effects in 460; externally-​headed relative clauses in 332–​333; free aspectual markers in 200; free relatives in 335; indexical points in 220; internally-​headed relative clauses in 328, 330–​331; left periphery in 597; modal meaning in 492; negation in 268–​270; non-​ restrictive relative clauses in 341–​344; null arguments in 305; order of signs in noun phrase 409; possessives in 226; question-​ answer pairs 602; restrictive relative clauses in 339; role shift in 360; sign duration in 214; sign for non-​specificity 407; sluicing in 304; topic marking in 592; use of space in 464; VP ellipsis in 301–​302; wh-​doubling in 243; wh-​movement in 244–​245, 246; word order patterns in 223–​224 Japanese Sign Language (JSL): use of blinks 77; hand configurations for masculine/​ feminine referents 6, 169n6; head positions and movements in 545–​547, 555; modal meaning in 492; negation in 279; relative clauses in 347n30; syntax in 533 Jordanian Sign Language (LIU): determiner phrases in 214; negation in 279 JSL see Japanese Sign Language (JSL) Kata Kolok 170n14, 464 kids of deaf adults (KODAs) see children of deaf adults (CODAs) kiki/​bouba effect  54 kinematics 205 language acquisition: by adult learners 640–​641; age of (AoA) 40, 46, 55, 57, 58, 60, 82, 451–​453, 607–​608, 676; in ASL 40, 125, 256–​257, 393, 445, 607; agreement 125, 126; in Auslan 175–​176, 177; in BSL 125, 177; by children 640–​641, 642–​643, 656, 656–​658; by children of deaf adults (CODAs) 615; classifier acquisition in 175–​179; content interrogatives 256–​257; by deaf children from deaf parents 125–​126; in DGS 82, 125, 126; difference between child and adult learners 640–​641; first language 124, 129, 175–​177, 451–​453; frequency boosting in 643; in HKSL 125, 208; in Libras 125, 175–​176, 177, 257, 607–​608; in LIS 125; manual markers for conditional clauses 83; perspectives on 39–​41; and negation

282–​284; of non-​manual markers 550–​551; in NSL 393–​394; of scalar implicatures 450–​453; second language 177–​179; in NGT 394; of spatial language 392–​394; stages of 256; studies of 82–​84; in TİD 393; of topics 607; in TİD 393; see also language emergence Language Acquisition Device (LAD) 641–​642 Language Bioprogram Hypothesis (LBH) 641–​642 language dominance 618 language emergence 636–​658; child learning mechanisms 642–​643; difference between child and adult learners 640–​641; emerging sign languages 647–​656; experimental evidence 640–​656; gestural language creation 645–​646; homesign 643–​645; intergenerational transmission and structure 646–​647; from pidgin to creole 641–​642; spatial agreement/​morphology 652–​655; structure from acquisition processes 639–​640; structure from cultural processes 639; structure from the mind/​biology 638–​639; theoretical accounts 637–​639; see also language acquisition language experience 36, 47; and prosodic cues 81–​82 language processing see processing (language) Language Synthesis model 257, 629–​630 layering 548; prosodic 77, 82; Strict 546 left hemisphere activation 48, 53, 57, 86, 183–​186, 188, 190, 287–​288, 385, 386, 388, 395, 557, 579; in ASL 48 left periphery: 240, 274, 336, 344–​346, 357, 432, 546, 595–​598, 596, 599, 606, 609n3; in ASL 597; in DGS 597–​598; in LIS 597 Leftward Movement Approach (LMA) 234 lengthening 71; phrase-​final 72, 77, 89, 482; syllable-​final  84 “Less is More” hypothesis 641 lexical access 40, 46, 49, 53, 55–​58, 579 lexical aspect (Aktionsart/​event structure) 194–​198; components of 196; feature-​based description of 195; inventory of 199; nano-​ syntax model for cross-​linguistic analysis 195, 196; types of 194; see also event structure Lexical Conceptual Structures (LCS) 103 lexical familiarity 50 lexical frequency 46, 49, 50 lexicality effect 49–​50, 579 lexical processing 45–​62; in ASL 60; in Auslan 53; and bimodal bilingualism 59–​61; cross-​ linguistic influences on 59–​61; deafness, plasticity, and the language network 46–​47; and iconicity 54–​55; in LSE 56, 57; sign

Index processing 47–​51; sign production 51–​54, 58–​59; sublexical units 55–​58 lexical retrieval 34, 52, 54, 55, 188 lexical segmentation 579 lexico-​semantic processing 47 Libras see Brazilian Sign Language (Libras) Linear Correspondence Axiom (LCA) 603–​604 linguistic knowledge 36, 39, 42, 190, 451 lip movements 492; pursed lips 483, 484, 492; see also mouth gestures, mouthings LIS see Italian Sign Language (LIS) listability problem 381 list-​buoys 450, 495n4, 605–​606, 606 LIU see Jordanian Sign Language (LIU) location 4–​11, 57, 58, 72, 179; acquisition of 177; processing of 382; and semantic clusters 52; and sign processing 56; see also loci locative expression 388–​390, 396; in DGS 388–​389, 389; in TİD 389 loci/​locus: agreement 150; in ASL 446, 447, 448, 502–​503, 513, 515, 518, 523; conditions of 515; doubling 383–​384, 394; embedded content interrogatives 513–​515; empty 135; erasure 525n5, 605; establishment of 309; as features 507; gender specification 517; height specification 410–​411, 470, 515–​518, 523; indexing 505; in LSF 502–​503, 516, 523; as morphosyntactic features 465–​468; pictorial 470–​471; placement 465; R-​397n6, 410–​411, 411; plural 436, 513–​514, 523; spatial 446, 447, 448; underspecification of features 468–​470; sub-​514–​515; as variables 466–​468, 502–​504; as variables or features 466–​468, 502–​503, 506–​507, 526n15; see also location logical visibility: classifier constructions 518–​522; hypotheses of 500–​501; iconic variables 512–​518; individual, time and world variables 504–​506; loci as variables 502–​504; role shift as visible context shift 508–​511; and semantics 500–​501; Strong Hypothesis of Variability 506–​507; theoretical directions 522–​525; variable visibility 501–​502; visible event decomposition 512; Weak Hypothesis of Variability 506 LSA see Argentine Sign Language (LSA) LSE see Spanish Sign Language (LSE) LSF see French Sign Language (LSF) LSFB (French Belgian Sign Language) 490 LSQ (Quebec Sign Language) 617, 624, 625–​626 magnetoencephalography (MEG) 46, 47 male-​female paradigm 6, 169n6 manner: features 15; node 11

manual flip see hand movements; palm-​up (manual flip) manual markers/​marking of prosody: in acquisition 84, and audiovisual prosody 88; of information structure 591; for conditional clauses in acquisition 83; lengthening/​holds 71; pauses 71; transitions 71 mapping 54, 163; iconic 470–​471 markedness principle 366 markers and marking: agreement 219; in ASL 206–​207, 416; aspectual 202, 205, 603; in Auslan 207, 207; bound markers of aspect 202–​205; for contrast 599–​601; of definiteness 220, 403, 404, 405–​407, 408, 412, 418; directionality 95–​96; discourse 482, 489–​490, 489, 495n4; in DGS 408; for domain restriction 416–​417; in DSL 407; of event structure 206–​207, 207; exhaustive 217; in FinSL 593; free aspectual 200–​202; in HKSL 408; in HZJ 206; indefinite 407–​409, 409; in IPSL 200, 203; in ISL 407; layering of 82; in LSC 428, 429; in LSE 203; morphological 428; in NicSL 203; in ÖGS 207, 207; possessive 225; prosodic markers 70, 77, 80, 82, 83, 339, 345, 354, 358, 595, 599, 608; question non-​manual 253; reciprocal 105; referential use of 418; repetition as 227n2; in SLF 429; for specificity 408; in SSL 203; in TİD 202, 205, 207; of verb agreement 95–​96; of verbal plurality 429; see also boundary markers/​ marking; focus marking; manual markers; non-​manual markers; topic markers Match Theory 534, 536, 559n4 Maxims of Manner and Quality 454 memory see short-​term memory (STM); working memory mental rotation 395 metaphors/​metaphorical 25, 103, 111, 568; time 569–​570 Minimalist Program 108, 112 minimal pairs 574, 575 modals: in ASL 492; in Auslan 492; deontic 415, 492; in DGS 492, 493–​494; epistemic 492, 495; in ISL 494; in JSL 492; in Libras 492; in LIS 492; in LSE 492; modal indexing 505–​506; modality and role shift 372; modal markers of uncertainty 492; modal meaning 491–​494; modal particles 480, 491; in ÖGS 492 monomorphemic signs 1, 14, 15, 16, 17, 26n8 monosegmental signs 14 Monosyllabicity Constraint 73–​74, 73 moraic layer 17 moras  71–​73 morphemes: in ABSL 199; abstract 148; aspect-​denoting 199; bound 551, 558; bound

Index aspectual 202; classifier 163; collective 429; concrete 148; and constraints 4, 7; cranberry 28n34; directional 102–​103; distributive 429; exhaustive 430; expressing plurality 430; free 551, 558; grammaticalization of 199; phonological properties of 15, 22, 25; in RSL 429, 430; see also classifiers morphology: allocative 435; in Al-​Sayyid Bedouin Sign Language (ABSL) 653; concatenative operations 16; conflation with phonology 25; distributive 435; and gesture 574; in LSC 435; Morphological Structure 26, 148; in Nicaraguan Sign Language (NSL) 653–​654; quantificational 428–​431; reduplicative 430; of sign languages 575–​576; in sign languages 652; and verb agreement 99–​100, 100, 103–​104 morpho-​syntactic identity  303 morphosyntax 71, 533, 534; and classifiers 168; and verb agreement 123–​124, 127, 134, 148 motion events 578–​579 motor embodiment 391; in ASL 391 motor theory of sign perception 391 mouth gestures 75, 408, 409, 492, 493, 494, 531, 568; in ASL 75; lip movements 492; pointing (lips) 568; pursed lips 483, 484, 492; spreading 75; tensed upper lip 549; tongue protrusion 531 mouthings 75, 256, 257, 482, 487–​488, 549, 558, 559, 560n12, 667; lexical 532 movement 81; compared to vowels 72; secondary 11, 27n20; and syllables 72 movement types 11, 19, 57, 58, 152, 179; local movement 11; path movement 11; and verbal root 153 Multi-​Dominance analysis  251 M unit 16–​17 natural experiments 631, 637, 638, 640 negation 86, 256, 266–​291; acquisition of 282–​284; in ASL 271–​273, 274, 276–​277, 278; in Auslan 270; in BSL 287–​290; contrastive 273–​275; in DGS 269–​270, 279–​281, 287; double 291n4; experimental perspectives 282–​290; final-​over-​final constraint 267–​268; in GSL 276; in HKSL 270–​271; in a homesign system 284–​287; in HZJ 273–​274; in JSL 279; in Libras 273; in LIS 268–​270; in LIU 279; in LSC 269–​270; manual sign for 538; markers of 358; Negative Concord (NC) 273, 280; neurophysiology of 287–​290; non-​manual markers of 275–​277; in ÖGS 270; other distributions of 273–​275; position of in clause structure 267–​275; theoretical perspectives 267–​282; in TİD 268–​270, 276, 277, 279, 280–​282, 287; types of 284;

typological issues 175–​282; typology based on feature values 280–​282; typology based on where [+neg] occurs 278–​280 Negative Concord (NC) 273, 280 Negative Polarity Item (NPI) 511 neighborhood density 53, 55 neuroimaging and neurocognitive studies 46–​47, 51, 53, 59, 60, 85–​87, 190–​191, 571, 582, 583; in BSL 86, 184–​186, 581–​582; on classifier acquisition 183–​188; of negation 287–​290; neurophysiology 54, 287–​290; and visual imagery 395; of working memory 672–​673 New Zealand Sign Language (NZSL): discourse markers in 489; non-​manual markers in 415; turn-​taking signals in 482–​483 NGT see Sign Language of the Netherlands (NGT) Nicaraguan homesign 214, 649 Nicaraguan Sign Language (NicSL) 117, 395, 636, 647–​657; bound markers of aspect in 203; compared to homesigns 657; future directions for study 655–​656; nouns in 214; spatial agreement in 653–​655; spatial language abilities 395; word order in 648–​649, 651–​652 non-​arbitrariness see iconicity Non-​dominant Hand Spread 75, 76, 76 non-​dominant language  60 non-​isomorphism (non-​isomorphy) 79, 80, 548, 533–​534, 554 non-​linguistic processing  394 non-​manual cues  135 non-​manual markers (NMMs) 79, 83, 84, 88, 101–​102, 102, 238, 244, 248–​250, 326, 328, 334, 341–​342, 344, 415, 444, 530–​559, 568, 591; acquisition of 550–​551; analysis of null arguments 532–​534; argumentation and testing claims 531; in ASL 249–​250, 538–​539, 555; in Auslan 415; blinks 77, 80; in BSL 405, 557; for definiteness and indefiniteness 405–​409; in DGS 538; dual-​function 550; effect of signing rate on 553–​554; evaluation of 547–​550; experimental perspectives 550–​557; eye-​tracking of 556–​557; form and function in content interrogatives 248–​250; functioning as intonation 549, 551, 559; historical development of the treatment of 532–​534; interaction of syntax, semantics, and prosody 534–​538; interpretation of 558–​559; interrogative 249–​250, 538–​540, 560n11; in ISL 80; lexical 560n12; linguistic functions of 530; in LSC 538; in LSF 560n9; and modal meaning 492, 494; as morphemes 558; morphological functions of 547; motion capture of 554–​556; of negation 86,

Index 275–​277, 358, 557; neural processing of 557; nose wrinkle 258, 337, 408; in NZSL 415; in ÖGS 415; perception of 556–​557; and point-​ of-​view operators 356–​358; prosodic analysis 532–​537, 556; prosodic function of 548; referential use of 418; relationship to manual signs 547–​550; in RSL 408; of the scope of [+wh] operators 249; and semantics 540–​547; semantic/​syntactic functions of 549; syntactic analysis 532–​533, 536; syntactic approaches to 538–​540; in TİD 249–​250, 538–​540, 560n11; topic markers 358; as turn-​taking signals 481–​485; in wh-​questions 256, 358 non-​signs  49–​50 non-​specificity, signs for 407, 408, 408 Norwegian Sign Language (NSL): acquisition of 393–​394; determiner phrases in 214; discourse markers in 489 nose wrinkle 258, 337, 408 noun phrases: in ASL 409; definite 403; in DSL 409; indefinite 403; in LIS 409; noun-​verb pairs 214–​215; order of signs within 409–​410 nouns: building 213–​215; incorporation of 142–​146; in NicSL 214; simplex 226n1 noun-​verb pairs 214–​215; in ASL 213–​214 NP/​DP parameter 216–​219 NSL see Norwegian Sign Language (NSL) null arguments 295–​296; acquisition of (syntactic factors) 315–​317; acquisition studies 315–​320; antecedent activation of 311, 313; in ASL 149, 298–​301, 305, 310, 315–​317; and bimodal bilingualism 620–​621; in BSL 318–​320; ellipsis analysis of 304–​306; experimental perspectives 309–​321; in FinSL 304; in LIS 305; psycholinguistic studies with adults 311–​315; reference tracking in 317–​320; in spoken languages 296–​297; studies of adult L2 learners 320–​321; theoretical perspectives 295–​307 numerals 144, 222–​224, 405–​407, 413, 424–​426, 446, 476, 605 NZSL see New Zealand Sign Language (NZSL) ÖGS see Austrian Sign Language (ÖGS) one-​handed signs 12, 22; see also signs Operator-​Restrictor-​Nuclear scope 431–​433, 431 Optimality Theory 26n12 orientation (hand) 7–​8, 103–​104; absolute direction of palm/​fingers 7; acquisition of 177; change in 8; downward movement 9; and verb agreement 122 palm-​up (manual flip) 257–​258, 286; as backchannel signal 484–​485, 485; used for

coherence 488–​490; for connecting clauses 488; as discourse marker 489, 494; for evaluative and epistemic meaning 488, 492, 493; for negation 286; as turn-​taking gesture 482–​484, 484 parallelism, syntactic 600 Parallel Merge analysis 251 particles 491, 596; in ASL 491; in DGS 491; focus 531, 594; one-​based definite 404; question 246–​248; in RSL 491; zero 341; see also discourse particles; response particles partitive constructions: in LSC 417, 417 partitive specificity 415–​417 path movement 10, 18, 87, 98; non-​interpolation 27n19; and verb agreement 103, 122 patterning 23; duality of 1–​2; see also double articulation patterns: intonational 491; of mouth spreading 75; of movement 381; universal 222; of verb agreement 107; word order 222–​225 pause fillers 483–​484 pauses 77, 80, 81, 82, 341, 342, 483, 492, 553–​554 perception: audio-​visual 87; auditory 19; in bilingual bimodals 614; categorical 34–​36, 56, 40, 42, 183; color 189; enhancing 88; language 33–​34, 38–​39; linguistic experience 33, 36–​39; motor theory of 391; by native/​ non-​native signers 38–​39, 38–​39; of non-​ manual markers 556–​557; prosodic 80–​82; rhythmic 80–​82; of sign language 19, 35, 579, 584; of speech 41, 615; studies of 42, 128–​130, 190, 668; of topic marking 607; verbs of 196; visual 19, 33; visual-​spatial  394 perceptual saliency 56 perspectives, mixing 510 phonemes 13–​14, 15, 39, 47, 52, 57; variations in 35 phonetics 21, 22; pre-​specified properties 24; variations 36, 39 phonological comprehension: acquisition perspectives 39–​41; in ASL 34, 37; and categorical perception 34–​36; coarticulatory effects 41; and linguistic experience 36–​39; and perceptual sign language characteristics 33–​34 phonological decoding 58 phonological encoding 54, 58, 59 Phonological Form (PF) 148 phonological parameters 46, 54, 58–​59; handshape 4–​8; location (Place of Articulation) 4–​11, 114; movement types 11 phonological phrases 75–​76, 78, 537 phonological priming 57 phonological processing 39, 49; and sign recognition 40–​41 phonological structure 26; in ISL 8

Index phonological tasks 52 phonological utterances 71 phonological words 537 phonology: acquisition perspectives 39–​41; basic units and constraints 1–​4; and categorical perception 34–​36; coarticulatory effects 41; and comprehension 41; conflation with morphology 25; dependency 5; Dependency Phonology 5; exemplar-​based 23; and gesture 573; grammatical 20–​21, 21–​23, 26; handshape 4–​7; iconicity 23–​25; interface with syntax 545; and language processing 41; and linguistic experience 36–​39; location 4–​11, 114; movement types 11–​13; orientation 7–​8; and perceptual sign language characteristics 33–​34; and the prosodic hierarchy 71; and prosodic markers 595; rules 19–​23; semantic 28n36; of sign languages 574; signs as single segments 13–​16; syllable structure 16–​19; utterance 20–​21; utterance vs. grammatical 1, 19–​23, 26; see also cheremes; comprehension, phonological phonotactic structure 49 phrase-​final lengthening see lengthening picture-​naming paradigms  58 picture-​sign interference tasks 52, 58, 59 pidgin languages 641–​642 PJM see Polish Sign Language (PJM) place node 14 Place of Articulation (PoA) see articulation pluractionality 429 plurals 513–​515; dynamic semantics of 475–​476 pointing 568; in E-​type theories 474; gestural 459; and null arguments 299; using lips and feet 568; see also indexical pointing point of view 557; character (CVPT) 392 point-​of-​view operators (PVOps) 358, 372; and non-​manual marking 356–​358 Polish Sign Language (PJM): indexical pointing in 220–​221; word order patterns in 223 positron emission tomography (PET) 46, 53, 185 possessives 143, 225–​226, 404; in ASL 225–​226; in BSL 225; in LIS 226; in LSC 225; in TİD 226 Possible Word Constraint (PWC) 37 poverty of the stimulus argument 638, 658n1 pragmatics 440, 442, 459, 480, 535; Maxims of Manner and Quality 454; and modal meaning 494; post-​compositional 444; and semantics 454–​455; studies of 452, 454; and verb agreement 108 Predicate Argument Structures (PAS) 103

predicates: classifier 149–​151, 150, 164–​168, 379–​380, 381, 382, 394, 454; in HKSL 153; semantic analysis of 164–​168; syntactic structure of 162–​163; see also verbs priming 56, 60; in ASL 128–​129; cross-​ linguistic 621; cross-​modal 56; effects 46, 571, 579, 621; implicit bimodal 60, 622–​623; inhibitory 56; phonological 57, 539; semantic 46, 49, 50, 55; studies of 50, 56–​57, 128–​129, 136, 571, 579 Probe-​Goal theory  245 probe recognition tasks 383 processing (language): in ASL 129, 258, 394; brain activation during 579–​580; and cross-​ modal plasticity 46–​47; grammatical 387, 394; role of phonology 41; sentence 258; of spoken language 47, 61–​62; supramodal stage 41; syntactic 129; see also lexical processing; processing (sign) processing (sign): in BSL 48, 52, 56, 57; frequency effect 50; lexical 48, 52, 56, 57; and lexical access 49; lexicality, lexical frequency, and semantic effects 49–​51; lexicality effect 49–​50; and location 56; and the phonetic processing of signs 2; and sign production 51–​54; signs vs. body movements and gestures 47–​48; of sublexical units 46, 55–​58; see also processing (language)pronouns: ambiguity in 501; in ASL 407, 460–​463, 470, 504; bound readings of 461–​464; clitics 116; co-​referent 461–​464; in DGS 407, 462, 468–​469; diverse uses of 504–​506; donkey 473; iconicity in 523; indefinite 406–​407, 413–​414; in LSC 406–​407, 413–​414, 462–​463; in LSE 407; in LSF 462–​463, 470–​471; neutral 468; null 460; in null arguments 311–​315; plural 523; pointing signs for 96; resumptive 461; in RSL 462; semantics of 461–​464; in sign language and spoken language 461–​464; in sign languages 504; syntax of 460–​461; theories of 459; use by deaf children 126–​127; see also anaphora; discourse anaphora proprioceptive feedback 53, 54 proprioceptive monitoring 59 prosodic reorganization in 80; in ASL 80; prosodic focus marking 608; prosody in 81 prosody 70–​89, 190, 545; in ABSL 85; acquisition of 82; in American Sign Language (ASL) 80, 81, 545, 608; audio-​ visual vs. sign language 87–​88; boundaries 77, 80, 81, 84, 494, 551, 559, 592; breaks 83, 252, 488, 545, 592, 594; cues 81–​84, 89, 534; in DGS 82; experimental studies 80–​87; features 80, 85, 87, 114; and focus 598–​599; focus marking in ASL 608; in HKSL 77, 88; and intonation 71, 76–​78, 80, 81, 85, 87, 491,

Index 537–​538; intonational phrases 71, 76–​78, 81, 85, 326, 339, 341, 342, 535, 536, 537, 559; in ISL 74, 74, 75, 535; and language acquisition 82–​84; linking 537; in LSC 88; manual markers of 71, 83, 84, 88, 591; markers 70, 77, 80, 82, 83, 339, 345, 354, 358, 595, 599, 608; marking of clause dependency 85; marking syntactic derivations with 545–​546; neurolinguistic studies 85–​87; non-​manual markers 70–​71; perception of 80–​82; and phonological phrase 75–​76; phrasing 80, 535, 558; and the prosodic hierarchy 71, 73, 75, 80, 88, 534–​535, 536, 544; Prosodic Model 17–​18, 72, 81, 87; prosodic reorganization in ASL 80; and prosodic structure 84–​85; and prosodic word 73–​75; signals 481–​482; in spoken languages 81, 89; spreading 537; structure 78–​80, 89, 535–​536; the syllable and mora 72–​73; syntactic vs. prosodic structure 78–​80; syntax-​prosody interactions 544–​545, 548, 555; theoretical description 70–​71; words 73–​74, 73–​74; see also non-​manual (prosodic) marking pseudoclefts 335, 346n14; see also question-​ answer clauses/​pairs psycholinguistics 50, 51, 55, 62, 455; and bilingualism 614; perception studies 128; production studies 128; of scalar implicatures 445; and spoken language production 53 pursed lips 483, 484, 492; see also mouth gestures quantification 417, 423–​436; in ASL 425, 426, 428, 436; in ASL 425, 426, 428, 436; lexical 424–​428; in LSC 425, 427, 432; in LSF 436; markers of domain restriction 416; morphology of 428–​431; in natural languages 436; and space 435–​436; quantificational binding 468, 488; in RSL 425; structural aspects of 431–​435; tripartite structures of 431–​433, 436 quantifiers 446, 448; A-​ 423–​424, 426–​428, 433, 436; cardinal 423; complex 423, 425; D-​ 423–​426, 432, 436; existential 425; lexical 424–​428; proportional 423, 425; scope ambiguities 413, 424–​425, 433–​434; universal 423, 425 Quebec Sign Language (LSQ) 617, 624, 625–​626 question-​answer clauses (QACs) /​pairs 252, 254–​255, 495n3, 560n9, 598, 601–​602; in ASL 602; in LIS 602; in LSF 602 question markers 560n8 question particles 246–​248; in IndSL 246–​248 Question Under Discussion Theory 255 questions see interrogatives; wh-​questions

quotations 511, 524; mixed 373n8; quoted utterances 359 reaction time studies 128–​129 reciprocal signs: in DGS 22 reduplication 73; dual 434–​435; random 434–​435 reference time representation 205–​208 reference tracking: in ASL 310; and classifier morphemes 149; discourse requirements for 317; in DGS 310; null and overt arguments 317–​320; preferences 310; and verb classifiers 152 Relational Grammar 108 relative clauses: in ASL 326, 328, 330, 332, 339; correlative clauses 325, 326, 336–​338, 341, 344, 345; in DGS 332, 334, 339, 341; externally-​headed (EHRC) 331–​334, 345; free relatives 334–​336, 346; in HKSL 328, 332, 334; internally-​headed (IHRC) 327–​331, 345–​346; in ISL 339; in JSL 347n30; in Libras 332; in LIS 328, 330–​334, 339; in LSC 328; non-​restrictive (appositive) (NRRCs) 341–​346; restrictive (RRCs) 338–​341, 345; stacking 330, 332; in TİD 328, 332, 339, 341 repetition: of movement 214; in RSL 214; two-​handed alternating  430 response particles 485–​488; in ASL 485–​487; in DGS 485–​488 resumptive pronoun 147, 325, 461 rhetorical questions 254–​255, 598, 601 rhythmic perception: in ASL 80–​82 Rightward Movement Approach (RMA) 234 R-​loci 397n6, 410–​411, 411 role shift 140, 165–​167, 170n15, 351–​372, 464, 483, 500; Action Role Shift (AcRS) 165, 354–​356, 367, 510–​511, 520, 521–​522, 524, 526–​527n18; in ASL 353, 358, 360, 510–​511, 522, 526–​527n18; Attitude Role Shift (AtRS) 354–​356, 360, 362, 364, 510, 520, 521–​522, 524, 526n22, 526–​527n18; context-​shifting operators and indexicals 359–​363; in DGS 354–​355, 355, 356, 360, 363, 365, 369–​370, 369, 370, 371, 371, 526–​527n18; in DSL 360; and gestural demonstrations 363–​370; iconic effects in 520–​522, 524; in LIS 360; in LSC 352, 526–​527n18; in LSF 353, 510–​511, 526–​527n18; mixing of perspectives vs. shift together 509–​510; and modality 372; as morpheme 170n17; multiple perspectives 370–​372; non-​manual marking and point-​ of-​view operators 356–​358; quotational 354; and sentential complementation 352–​353; in signed discourse and narration 363–​371; as visible context shift 508–​511, 524 root compounds 145, 151–​152

Index RSL see Russian Sign Language (RSL) rules: allomorphic 20; allophonic 20, 22; grammatical phonology and utterance phonology 19–​21; lexical vs. post-​lexical 28n29; phonological 35; well-​formedness  37 Russian Sign Language (RSL): classifiers in 170n14; distributive morphemes in 429; doubling in 603; exhaustive morphemes in 430; focus marking in 594–​595, 599; hand holds in 605–​606; indexical points in 220; information structure in 591; non-​manual markers in 408; particles in 491; pronoun use in 462; quantification in 425; repetition of movement in 214; spreading of the non-​ dominant hand in 76; topic marking in 592, 593 scalar implicatures: acquisition of (experiment) 451–​453; acquisition of (theory) 450–​451; in ASL 445, 448–​453; and age of language acquisition 451–​453; based on conjunction/​ disjunction 448–​450; embedded 444; experimental work on 440, 445; and pragmatics 445; scalar words 445; in the sign modality 446–​448; theories of grammar of 445 scope: ambiguities in 413, 424–​425, 433–​435; ambiguities in ASL 434; ambiguities in LSC 434–​435; specificity of 412–​414 SE see Signed English (SE) second language, written 60 secondary movements 27n20; hooking 11; nodding 11; pumping 27n20; rubbing 27n20; scissoring 27n20; twisting 11 segmentation 37, 205, 573; event 205, 208; lexical 579; visual 205 Selected Fingers Constraint 73–​74 semantic: bootstrapping theory 638, 639; clustering 52; effects in comprehension 49; operator 494, 532, 542–​544; priming 46, 49, 50, 55, 571; processing 47–​51, 54, 131, 571 semantico-​syntactic tests 196–​198 semantic phonology theory 52 semantics 79, 535; alternative 491; and the brain 579; compositional 440; dynamic 471–​476, 476; dynamic semantics of plurals 475–​476; lexical 103, 440; logical visibility and 500–​501; natural language 423; and pragmatics 454–​455; of pronouns 461–​464; reduplicative 430; studies of 452 semelfactives 194–​195 sentential complementation 352–​353 sequencing: intrasegmental 19; left-​to-​right scope 560n11; linear 19; right-​to-​left scope 560n11; segment 16; sonority 2; suprasegmental 19; temporally-​ based 668

serial recall 56, 666–​667, 669–​673, 674, 675, 677–​679; in ASL 671, 672 setting features/​units  8 Sharing analysis 251 shifting 360–​363 short-​term memory (STM) 664; in ASL 665; phonological 665–​670, 677–​678, 678–​679; visuospatial 673, 674–​675 shoulder movements 549 see also shrugs shrugs 286, 404, 484, 492, 493 sign comprehension 58; modality-​specific aspects of 46; see also comprehension, lexical processing Signed English (SE) 552 signers: adult fluent 191n4; adult L2 learners 320–​321; adult native 175, 310; aphasic 59; atypical 580–​581, 582; brain-​lesioned 46–​48, 57, 59, 86, 184–​185, 287–​290, 385, 557, 580; BSL 393, 581, 582; children of deaf adults (CODA) 552–​553; deaf 36, 37, 39, 46, 54, 55, 58, 60, 557; deaf children of deaf parents 318–​319; deaf early learners 80; deaf late learners 80, 445, 642–​643; deaf native 36, 41, 48, 52, 57, 80, 81, 82, 83, 127, 128–​129, 181, 186, 315–​317, 317–​320, 393, 452–​453, 551; deaf non-​native 36, 38, 41; early L1 learners 129; early non-​native 452–​453; hearing 46; hearing late signers 36, 60; hearing native 317–​320; hearing non-​signers 36, 39, 48; homesigners 84–​85, 257; infants as 83; L1 394; L1 hearing 83; L2 hearing 83; L2 learners of 320–​321, 393; late non-​ native 452–​453; later L1 learners 129; left-​ hemisphere damaged (LHD) 86, 184, 185, 288, 385–​386, 557, 580–​581; LSC 320–​321; native 393; native deaf 38; native hearing 38; native L1 learners 129; non-​native deaf 39, 39; non-​native hearing 39; non-​signers 179–​180, 181, 390; non-​signers (hearing) 557; with Parkinson’s disease 86–​87; right-​ hemisphere damaged (RHD) 86, 183–​184, 185, 288, 385–​386, 580–​581; in TSL 191n4; working memory in 670–​673 signing: backward 81; neural correlates of 382; rate/​speed of 80, 84, 481, 533, 553–​554; and speaking at the same time 552–​553 signing perspective 391; character 390, 392; observer 390–​392 Sign Language of the Netherlands (NGT): acquisition of 394; agreement acquisition in 125; aspectual morphemes in 202; and bimodal bilingualism 622, 626; bound markers of aspect in 203; classifier morphemes in 149; classifier acquisition in 178–​179; classifiers in 144–​145, 157, 168, 170n14; code-​blending studies 624; discourse markers in 489; distinguishing nouns from

Index verbs 214; externally-​headed relative clauses in 332; focus marking in 594–​595, 598–​601; free aspectual markers in 200; hand holds in 605; handshapes in 5, 6, 7; iconicity in 23, 24; indefinite pronouns in 407; indexical points in 220–​221; information structure in 591; left periphery in 596, 597; modal meaning in 493; mouth spreading patterns in 75; negation in 291n8; non-​manual markers in 408; particles in 491; root compounds in 151, 151, 152; sign location in, 8, 9; spreading of the non-​dominant hand in 76; topic-​copying in 593, 609n5; topic marking in 592; turn-​taking signals in 482; two-​ handed signs in 12; verb agreement in 100, 102, 106; verb types in 158–​161; word order patterns in 223–​224 sign languages: contrasts with spoken languages (interrogatives) 244–​246; differences between 2; differences from spoken languages 233, 245–​246; and dynamic semantic theories 474–​475; emerging 383, 566, 636, 647–​656; grammatical non-​manual markers in 258; grammatical phonology of 21–​23; iconic and gestural elements 577–​579, 583–​585; iconicity in 577–​578; and language modality 572–​574; modality-​independent and modality-​dependent aspects of 574–​577; morphology of 575–​576; negative concord (NC) 280–​282; non-​linguistic (gestural) elements of 382; perceptual characteristics 33–​34; phonology of 574; pronouns in 464; response of babies to 40–​41, 81, 83; rural 383, 424; social and linguistic contexts of 572–​573; SOV 268–​270, 328; SVO 270–​273; syntax of 575–​576, 577; typology of (negation) 278–​282; urban 424; see also spatial language sign languages by name: Al–​Sayyid Bedouin Sign Language (ABSL) 85, 199, 647–​656, 657; Argentine Sign Language (LSA) 100, 156; Asian sign languages 6; Australian Sign Language (Auslan) 53, 175–​176, 177, 179, 214, 672; Austrian Sign Language (ÖGS) 207, 207, 270, 415, 482, 492; Beijing Sign Language 139; Brazilian Sign Language (Libras) 104, 106, 114–​116, 125, 175–​176, 177, 220, 233, 241–​242, 257, 273, 332, 412, 492, 603, 604, 607–​608, 619, 624; British Sign Language (BSL) 37, 48, 52, 56, 57, 86, 125, 177, 179–​180, 184–​186, 200, 225, 287–​290, 318–​320, 380, 386, 393, 405, 485–​488, 489, 557, 572, 575, 581–​582, 582, 585, 673; Catalan Sign Language (LSC) 54, 88, 96, 96–​99, 98, 104, 105, 106, 114, 156, 225, 246, 269–​270, 320–​321, 328, 345, 352,

406–​407, 408, 408, 412, 413–​414, 416–​417, 417, 425, 427, 428, 429, 432, 434–​435, 462–​463, 508–​509, 510, 526–​527n18, 538, 599–​600; Chinese Sign Language 2; Croatian Sign Language (HZJ) 199, 200–​201, 203–​205, 204, 206, 208, 220, 273–​274; Danish Sign Language (DSL) 142, 360, 407, 409, 482, 483, 489, 490; Finnish Sign Language (FinSL) 246, 304, 593, 624; Flemish Sign Language (VGT) 482; French Belgian Sign Language (LSFB) 490; French Sign Language 166–​167, 353, 386, 429, 436, 462–​463, 470–​471, 502–​503, 510–​511, 516, 520, 523, 526–​527n18, 594, 560n9, 602, 671; German Sign Language (DGS) 22, 82, 100, 105, 106, 112, 123, 125, 126, 134, 144, 146, 147, 152, 170n14, 200, 269–​270, 279–​281, 287, 310, 332, 334, 339, 341, 354–​355, 355, 356, 360, 363, 365, 369–​370, 369, 370, 371, 371, 386, 388–​389, 389, 392, 407, 408, 462, 468–​469, 482, 485–​488, 489, 489, 491, 492, 493, 493–​494, 510, 526–​527n18, 538, 593, 594, 597–​599, 604, 622; Greek Sign Language (GSL) 100, 200, 202, 276; Hong Kong Sign Language (HKSL) 77, 88, 125, 139, 152, 153, 157, 158–​159, 160, 168–​169, 169n12, 176, 200–​201, 208, 220, 222, 270–​271, 328, 332, 334, 345, 406, 408, 592, 593; Indian Sign Language (IndSL) 246–​248, 609n4, 609n7; Indo-​Pakistani Sign Language (IPSL) 200, 203, 216, 246; Iranian Sign Language (ZEI) 492; Irish Sign Language (Irish SL) 491, 493; Israeli Sign Language 8, 22, 74, 74, 75, 76, 80, 101, 102, 114, 142–​143, 200–​201, 216, 253, 339, 407, 494, 535, 657; Italian Sign Language (LIS) 125, 200, 214, 220, 223–​224, 226, 233, 243, 244–​245, 246, 268–​270, 301–​302, 304, 305, 328, 330–​333, 335, 337, 339, 341–​344, 360, 407, 409, 460, 464, 492, 592, 597, 602, 624, 626, 628, 631n6; Japanese Sign Language (JSL) 6, 77, 169n6, 279, 347n30, 492, 533, 545–​547, 555; Jordanian Sign Language (LIU) 214, 279; New Zealand Sign Language (NZSL) 415, 482–​483, 489; Nicaraguan Sign Language (NicSL) 117, 203, 214, 636, 647–​657; Norwegian Sign Language (NSL) 214, 393–​394, 489; Polish Sign Language (PJM) 220–​221, 223; Quebec Sign Language (LSQ) 617, 624, 625–​626; Russian Sign Language (RSL) 76, 170n14, 214, 220, 408, 425, 429, 430, 462, 491, 591, 592, 593, 594–​595, 599, 603, 605–​606; Sign Language of the Netherlands (NGT) 5, 6, 7, 8, 9, 12, 23, 24, 75, 76, 100, 102, 106, 125, 144–​145, 149, 151, 151, 152, 157, 158–​161, 168, 170n14, 178–​179, 200, 202, 203, 214, 220–​221,

Index 223–​224, 291n8, 332, 394, 407, 408, 482, 489, 491, 493, 591, 592, 594–​595, 596, 597, 598–​601, 605, 609n5, 622, 624, 626; Spanish Sign Language (LSE) 56, 57, 109–​111, 203, 407, 492; Swedish Sign Language (SSL) 75, 200, 203; Swiss German Sign Language (DSGS) 75, 77, 82; Taiwan Sign Language (TSL) 6, 100, 169n6, 191n4, 220, 224–​225; Tianjin Sign Language (TJSL) 139, 157, 158–​159, 168–​169, 169n12; Turkish Sign Language (TİD) 200, 202, 205, 207, 214, 226, 249–​250, 268–​270, 276, 277, 279, 280–​282, 287, 332, 339, 341, 389–​390, 389, 392, 393, 435, 482, 538–​540, 560n11; see also American Sign Language (ASL) sign processing see processing (sign) signs: deictic pointing 337; disyllabic monomorphemic 72; in DSL 409; duration of 214; in focus 77; frozen 168, 169n7; gestural origins of 492; high-​frequency 53; iconic 578; iconic vs. non-​iconic 289; index 123, 409; interrogative 233–​246, 237–​238; in ISL 8; lexical 48, 493; lexical properties of 53; lexical retrieval of 52; linguistic processing of 48–​49; in LIS 214, 407, 409; low-​frequency 55; manual 415; monomorphemic 14, 15, 16, 17, 26n8; monosegmental 14; monosyllabic 72; motivated 145; negative ( see negation); neighborhood density of 53; new 24; non-​ iconic 578; for non-​specificity 407; one-​ handed 12, 22; order of within noun phrase 409–​410; orientation of 8; phonological encoding of 48; phonology of 8, 54; pointing 216, 219–​222, 226, 227n4, 309, 321n1; predictive 8; prenominal index 409; production of 55; psycholinguistic variables for 53; reciprocal 22; recognition of 40–​41, 49–​51, 56, 57; recognition in BSL 56; retrieval of 579; as single segments 13–​16; stressed 77; sublexical processing of 56; telic 198, 202, 204; that use space meaningfully 381–​382; two-​handed 12–​13, 12, 20, 605; two-​handed signs 22; with two or more movements 73; with one movement element 73 sign skeletons see skeletons sign space: 3-​dimensional 448; abstract use of 379, 379–​380, 380, 396; acquisition of spatial language in sign languages 392–​394; arbitrary use of 379; in ASL 383; behavioral studies 384–​385; co-​reference processing 383–​384; determined 412; diagrammatic 392; encoding of 464–​466; experimental perspectives 378–​396; gestural use of 394; grammatical use of 576; in LIS 464; locative expression 388–​390, 396; modulations in

410–​411; morphemic vs. analogue processing of location 387–​388; motivated use of 380; neuroimaging studies 385–​387; neutral use of 412; processing of topographic vs. abstract use of space 384–​385; and quantification 435–​436; referential function of 379; research questions and debates 382–​383; signing perspective and viewpoint 390–​392; signs that use space meaningfully 381–​382; spatial language and spatial cognition 394–​396; surrogate space 221–​222; syntactic 577; syntactic use of 379; token 222; topographic 576; topographic use of 379–​380, 380, 384–​387, 396; use of 381, 388–​392, 459, 464; use to express locative relationships 578 simultaneity, two-​hand  585n3 simultaneous communicators (SIMCOMs) 552–​553 size and shape specifiers (SASSes) 140; see classifiers skeletons 16–​17; bipositional 17, 19; position of 18 slips: of the hand 52, 58, 579; of the tongue 52 sloppy reading 301, 304–​305 sluicing 304; in ASL 304 somatosensory information 41, 53 Sonority Sequencing Principle 72, 73 sound symbolism 62 space see sign space Spanish Sign Language (LSE): bound markers of aspect in 203; indefinite pronouns in 407; lexical processing in 56, 57; modal meaning in 492; verb agreement in 109–​111 spatial agreement 653–​655; in Al-​Sayyid Bedouin Sign Language (ABSL) 653; in Nicaraguan Sign Language (NicSL) 653–​655; see also morphology, agreement morphology spatial cognition 394–​396 spatial encoding 389–​390, 389 spatialization process 380 spatial language: acquisition of 392–​394; in BSL 386; iconicity of 392; in LSF 386; and spatial cognition 394–​396; study 386 spatial reasoning 396 spatial relationships 569 specificity 403, 404; and domain restriction 416–​417; epistemic 413–​414; epistemicity 405; markers for 404, 407, 408; partitive 415–​417; partitivity 405; scopal 412–​414; scope 405; types of 412–​413 spoken language: feature geometry of 4; intonation in 540, 549 spreading 75–​76; syntactic 537–​538, 540; tonal 532

Index squint 326, 328–​329, 339, 344, 345, 407, 410, 548–​549; see also eye expressions SSL see Swedish Sign Language (SSL) stacking 330, 332 stimulus onset asynchronies (SOA) 50 Stokoe, William 1, 7, 13, 15, 16, 25 stress assignment 75 Strict Layering hypothesis 536 Strong Hypothesis of Variability 506–​507 sub-​articulators 14, 27–​28n26 sublexical frequency 56 subordinate clauses 337, 352; in ASL 299, 345; brow raise 545; context shift in 167; dislocation in 147, 329; displacement of 345; and head position 547; in HKSL 345; indexicals in 509; in indirect discourse 511; interrogative signs in 239, 252, 253; left periphery 597; in LSC 345; pronouns in 359; relation to matrix clause 444, 511; role shift in 166, 367; sentential complementation 352–​353; VP ellipsis in 301, 302; see also embedded content interrogatives; relative clauses Superior Temporal Cortex (STC) 47 suprasegmental features 18, 70; see also prosody surrogate space 221–​222 Swedish Sign Language (SSL) 75; bound markers of aspect in 203; free aspectual markers in 200 Swiss German Sign Language (DSGS) 75, 77, 82 switch-​reference systems  503 syllables 4, 15, 46, 71–​72; disyllabic morphemic signs 72; structure of 16–​19, 72 symbolism: in ABSL 85 Symmetry Constraint /​symmetry condition 12, 73–​74, 73, 600 syncretism 466, 469; in ASL 469; in DGS 469; spatial 468–​470 syntactic processing: in ASL 129 syntax: in ASL 83; first phase 163; and gesture 574; innate 638; interface with phonology 545; in JSL 533; of non-​manual markers 545; of pronouns 460–​461; of sign languages 575–​576, 577; violations of 386 syntax-​prosody interactions 544–​545, 548, 555; in ASL 545 tag questions 491, 492 Taiwan Sign Language (TSL): adult fluent signers 191n4; agreement auxiliaries in 100; indexical points in 220; male-​ female paradigm in 6, 169n6; word order patterns in 224 telicity (end-​marking) 162–​163, 197, 198, 202, 205, 206, 208, 215, 512, 524–​525

temporal indexing 505 thumb positions 5 Tianjin Sign Language (TJSL): classifier handshapes in 139; classifiers in 157, 158–​159, 168–​169, 169n12 Tic Tac 48, 581, 582 TİD see Turkish Sign Language (TİD) timing units 16 tip of the fingers (ToF) effects 51–​52, 58, 579; and ASL 52 tip of the tongue (ToT) effects 51–​52, 55; and ASL 52 TJSL see Tianjin Sign Language (TJSL) token space 221–​222 tonal spread 532 tones 28n28, 80, 122, 326 tongue protrusion 531; see also mouth gestures topicalization 76, 152, 237, 243, 592–​594, 597, 606–​608 topic(s): aboutness vs. scene-​setting 593; copying 593, 609n5; hanging 593, 596; left dislocation 593; moved 596; scene-​setting  596 topic markers 358, 407, 592–​593; in ASL 592, 593; in DGS 593; in HKSL 592, 593; in HKSL 592, 593; in IndSL 609n7; in LIS 592; in RSL 592, 593 topicalization 593, 596, 607–​608; in Libras 607–​608 transcription system 2 tremoring movement 406; see also hand movements TSL see Taiwan Sign Language (TSL) Turkish Sign Language (TİD): acquisition of 393; aspectual markers in 202, 205; determiner phrases in 214; externally-​ headed relative clauses in 332; free aspectual markers in 200; indefinite expressions in 435; indexical point in 220; internally-​headed relative clauses in 328; locative expression in 389; markers of event structure in 207; negation in 268–​270, 276, 277, 279, 280–​282, 287; interrogative non-​manual markers in 249–​250, 538–​540, 560n11; non-​restrictive relative clauses in 341; possessives in 226; restrictive relative clauses in 339; spatial encoding in 389–​390, 389; turn-​taking signals in 482; use of handling classifiers 392 turn-​final holds  481 turn-​taking signals 481–​485, 568; in ASL 482; in Auslan 482; in DGS 482; in DSL 482, 483; in NZSL 482–​483; in ÖGS 482; in TİD 482 two-​handed signs 12–​13, 12, 20, 22; asymmetrical 13, 13; balanced vs. unbalanced 27n22; dominance condition 12, 227n4, 251, 580, 599, 605, 606, 618; hand

Index interaction features 13; symmetrical 13, 13; symmetry condition 12, 73–​74, 73, 600 uncertainty, modal markers of 492 unimodal bilinguals 59, 61, 614, 617 uniqueness 404 Universal Grammar 525 Universal Semantics 522 unmarked features 3, 12 unquotation 362 utterances: boundaries of 84; mixed 617; phonological 71; quoted 359; repeated 359; strengthening/​softening 491, 493 Variable Visibility hypothesis 500–​501, 503; and ASL 506 verb agreement 146, 381, 556; acquisition of 124–​128; agreement auxiliaries 100–​101; agreement markers 96; in ASL 22, 97, 107, 357; clitic analysis 116–​117; as cliticization of pronouns onto verbs 116; in DGS 100, 105, 106, 112, 123, 134, 147; ERP studies 132–​135; examples 96–​99; experimental perspectives 122–​135; foundations of 107–​108; generative syntactic analyses 108–​116; as gestural phenomenon 123–​124; in GSL 100; incoherent agreement violations 135; with intransitive predicates 101; in ISL 22, 102; in LSC 96, 96–​99, 98, 104, 105, 106; in LSE 109–​111; marked by head tilt 130; mimetic or spatial analogy model 125; morphological model 125–​126; non-​manual agreement 101–​102; non-​manual markers 102; by orientation 114; pragmatic 108, 111; processing of mismatches 133–​135; realization of 22; reversed agreement violations 135; spatial 126; syntactic approaches to 107–​117; tested in eye tracking studies 129–​131; tested in reaction time studies 128–​129; tested with offline methods 128–​131; tested with online methods 131–​135; thematic approaches to 102–​107; theoretical perspectives 95–​118; with transitive predicates 96; in two-​place predicates 112; unspecified agreement violations 135; verb classes and 97–​100; violations of 132–​133; see also agreement; verbs verbal classifier phrases (VCLPs) 152–​153 Verb Island Hypothesis 639 verb roots 141–​142 verbs: with accusative linearization 114; in ASL 101–​102, 114; atelic 520; backwards 114, 556; backwards agreement 98, 100, 103–​107, 111–​117, 118n8; in BSL 380; classes of 97–​101, 116; classifier 146–​147; depicting 379, 381; ditransitive constructions

115; double agreement 115; with ergative linearization 114; of existence, location, and movement (VELMs) 149; indicating 123, 380, 381; in ISL 114; in Libras 104, 106, 114–​116; in LSC 114; manner 157; modifications to 428; non-​manual agreement 101–​102; plain 146; polycomponential 142; polymorphemic 142; reversed 386; spatial 147, 379–​380, 652; telic 520; transitive 103, 556; see also classifier predicates; verb agreement verbs of transfer see verbs, ditransitive VGT (Flemish Sign Language) 482 viewpoint 557; character (CVPT) 392 viewpoint aspect 199–​200 visible event decomposition 512 visual-​gestural modality 598–​606 visual perception 19, 37 vocabulary: existing 24–​25; frozen 24–​25 VP ellipsis 296, 301–​304; in LIS 301–​302 weak hand holds 604–​606 Weak Hypothesis of Variability 506 wh-​clefts 254–​255, 531, 541, 544, 598, 601–​602, 608 wh-​doubling 234, 241–​242, 243, 619; in Libras 241–​242; in LIS 243 wh-​movement: in ASL 104, 244, 252; in FinSL 246; in LSC 246; in IPSL 246; in LIS 244–​245, 246; see also interrogatives wh-​questions 78, 79, 83, 129, 545, 546, 554, 560n9, 601, 609n4; in ASL 250–​251; in IndSL 609n4; multiple 250–​252; see also content interrogatives word order: in ABSL 648, 652; in ASL 223; asymmetry in 600; in LIS 223–​224; in NicSL 648–​649, 651–​652; patterns of 222–​225; in PJM 223; in TSL 224 working memory (WM) 664–​679; in ASL 677; in Auslan 672; in BSL 673; in LSF 671; measures of 671; representation of different tasks 678; role in sign language processing 670–​675, 677–​679; serial recall sequence 674, 674; for spatial locations 384; storage in 2; studies on sign language agreement 671–​673; task considerations 676–​677; temporal and spatial processing in 675–​676; theoretical perspectives on 664–​665; visuospatial 674–​675 wrist movements 406; see also hand movements yes-​no questions 8, 78, 80; see also interrogatives ZEI (Iranian Sign Language) 492 zero anaphora 310, 312, 313, 320 zero particles 341
